Dec 03 09:12:10 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 09:12:10 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 03 09:12:10 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 
09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 09:12:11 crc 
restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 
09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 09:12:11 crc restorecon[4690]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 
09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 09:12:11 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 03 09:12:12 crc kubenswrapper[4856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 09:12:12 crc kubenswrapper[4856]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 09:12:12 crc kubenswrapper[4856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 09:12:12 crc kubenswrapper[4856]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
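The kubenswrapper warnings above all make the same point: these kubelet command-line flags are deprecated in favor of the file passed via --config. As a rough sketch only, a KubeletConfiguration covering the flags warned about so far might look like the following; every concrete value here (the CRI-O socket path, the plugin directory, the example taint, the eviction threshold) is an illustrative assumption, not read from this node:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (CRI-O socket path assumed)
    containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
    # replaces --volume-plugin-dir (directory assumed)
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints (example taint, not from this log)
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # --minimum-container-ttl-duration is deprecated in favor of eviction
    # thresholds; an --eviction-hard equivalent (threshold assumed):
    evictionHard:
      memory.available: "100Mi"

The kubelet would then be started with --config pointing at this file and the deprecated flags dropped from the unit's command line.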
Dec 03 09:12:12 crc kubenswrapper[4856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 09:12:12 crc kubenswrapper[4856]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.040247 4856 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046681 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046731 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046741 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046750 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046759 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046770 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046779 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046786 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046795 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046828 4856 feature_gate.go:330] unrecognized feature gate: Example Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046836 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046845 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046853 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046861 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046869 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046876 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046885 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046893 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046901 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046908 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046916 4856 feature_gate.go:330] unrecognized feature gate: 
BootcNodeManagement Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046923 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046932 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046940 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046947 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046955 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046963 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046974 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046985 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.046994 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047002 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047010 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047019 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047027 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047048 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047056 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047064 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047072 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047080 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047087 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047095 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047103 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047111 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047119 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047126 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047138 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
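The long run of "unrecognized feature gate" warnings above appears to be benign: the gate names (OVNObservability, NewOLM, PinnedImages, ...) look like OpenShift cluster-level gates handed to a kubelet that only knows the upstream Kubernetes gates, and the kubelet warns on the unknown names rather than failing, as the log itself shows. A simplified Python mimic of that behaviour (an assumption about the logic in feature_gate.go, not a transcription of it):

```python
# Simplified mimic of the gate handling seen above: known gates are applied,
# unknown ones only produce a warning. KNOWN_GATES is a tiny stand-in set.
KNOWN_GATES = {"CloudDualStackNodeIPs", "ValidatingAdmissionPolicy", "KMSv1"}

def apply_gates(requested: dict[str, bool]) -> dict[str, bool]:
    applied = {}
    for name, value in requested.items():
        if name not in KNOWN_GATES:
            print(f"W unrecognized feature gate: {name}")
            continue
        applied[name] = value
    return applied

print(apply_gates({"CloudDualStackNodeIPs": True, "OVNObservability": True}))
```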
Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047148 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047157 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047166 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047217 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047228 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047239 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047249 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047259 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047267 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047275 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047283 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047291 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047298 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047306 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047314 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047321 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047329 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047336 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047344 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047352 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047360 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047368 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047376 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047386 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.047394 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 
03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047543 4856 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047561 4856 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047575 4856 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047586 4856 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047598 4856 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047608 4856 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047620 4856 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047632 4856 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047641 4856 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047651 4856 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047660 4856 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047672 4856 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047681 4856 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047690 4856 flags.go:64] FLAG: --cgroup-root="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047699 4856 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047708 4856 flags.go:64] FLAG: --client-ca-file="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047717 4856 flags.go:64] FLAG: --cloud-config="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047726 4856 flags.go:64] FLAG: --cloud-provider="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047735 4856 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047747 4856 flags.go:64] FLAG: --cluster-domain="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047756 4856 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047765 4856 flags.go:64] FLAG: --config-dir="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047774 4856 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047783 4856 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047794 4856 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047827 4856 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047837 4856 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047846 4856 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047856 4856 flags.go:64] FLAG: --contention-profiling="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 
09:12:12.047865 4856 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047874 4856 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047884 4856 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047893 4856 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047905 4856 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047914 4856 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047923 4856 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047932 4856 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047941 4856 flags.go:64] FLAG: --enable-server="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047950 4856 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047962 4856 flags.go:64] FLAG: --event-burst="100" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047971 4856 flags.go:64] FLAG: --event-qps="50" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047980 4856 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047990 4856 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.047999 4856 flags.go:64] FLAG: --eviction-hard="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048010 4856 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048020 4856 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048029 4856 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048040 4856 flags.go:64] FLAG: --eviction-soft="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048049 4856 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048058 4856 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048068 4856 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048077 4856 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048085 4856 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048094 4856 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048103 4856 flags.go:64] FLAG: --feature-gates="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048114 4856 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048123 4856 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048133 4856 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048142 4856 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 
09:12:12.048151 4856 flags.go:64] FLAG: --healthz-port="10248" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048160 4856 flags.go:64] FLAG: --help="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048169 4856 flags.go:64] FLAG: --hostname-override="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048178 4856 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048188 4856 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048196 4856 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048205 4856 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048214 4856 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048223 4856 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048233 4856 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048242 4856 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048251 4856 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048261 4856 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048270 4856 flags.go:64] FLAG: --kube-api-qps="50" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048279 4856 flags.go:64] FLAG: --kube-reserved="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048288 4856 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048297 4856 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048309 4856 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048318 4856 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048329 4856 flags.go:64] FLAG: --lock-file="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048338 4856 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048348 4856 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048358 4856 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048381 4856 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048391 4856 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048401 4856 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048410 4856 flags.go:64] FLAG: --logging-format="text" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048419 4856 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048429 4856 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048439 4856 flags.go:64] FLAG: --manifest-url="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048448 4856 
flags.go:64] FLAG: --manifest-url-header="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048460 4856 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048469 4856 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048480 4856 flags.go:64] FLAG: --max-pods="110" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048489 4856 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048499 4856 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048508 4856 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048517 4856 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048526 4856 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048535 4856 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048545 4856 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048565 4856 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048574 4856 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048583 4856 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048592 4856 flags.go:64] FLAG: --pod-cidr="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048601 4856 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048616 4856 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048625 4856 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048634 4856 flags.go:64] FLAG: --pods-per-core="0" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048643 4856 flags.go:64] FLAG: --port="10250" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048652 4856 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048661 4856 flags.go:64] FLAG: --provider-id="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048670 4856 flags.go:64] FLAG: --qos-reserved="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048679 4856 flags.go:64] FLAG: --read-only-port="10255" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048689 4856 flags.go:64] FLAG: --register-node="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048698 4856 flags.go:64] FLAG: --register-schedulable="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048708 4856 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048723 4856 flags.go:64] FLAG: --registry-burst="10" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048732 4856 flags.go:64] FLAG: --registry-qps="5" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048741 4856 flags.go:64] 
FLAG: --reserved-cpus="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048751 4856 flags.go:64] FLAG: --reserved-memory="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048763 4856 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048773 4856 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048782 4856 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048791 4856 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048800 4856 flags.go:64] FLAG: --runonce="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048831 4856 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048840 4856 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048850 4856 flags.go:64] FLAG: --seccomp-default="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048859 4856 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048868 4856 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048877 4856 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048886 4856 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048897 4856 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048906 4856 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048915 4856 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048924 4856 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048933 4856 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048943 4856 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048952 4856 flags.go:64] FLAG: --system-cgroups="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048961 4856 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048975 4856 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048984 4856 flags.go:64] FLAG: --tls-cert-file="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.048993 4856 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049004 4856 flags.go:64] FLAG: --tls-min-version="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049014 4856 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049023 4856 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049032 4856 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049041 4856 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049050 4856 flags.go:64] 
FLAG: --v="2" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049062 4856 flags.go:64] FLAG: --version="false" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049073 4856 flags.go:64] FLAG: --vmodule="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049084 4856 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049094 4856 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049292 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049303 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049313 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049321 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049330 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049338 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049347 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049355 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049362 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049370 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049378 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049386 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049394 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049404 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049414 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049423 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049432 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049441 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049449 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049457 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049465 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049475 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049486 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049495 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049505 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049516 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049525 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049536 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049546 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049555 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049566 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049574 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049582 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049592 4856 feature_gate.go:330] unrecognized feature gate: Example Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049601 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049609 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049617 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049625 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049635 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049643 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049651 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049659 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049666 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049674 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049682 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049690 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049697 4856 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049705 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049714 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049722 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049729 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049737 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049745 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049753 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049761 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049768 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049776 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049784 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049792 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049800 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049832 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049840 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049848 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049856 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049864 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049872 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049880 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049888 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049896 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049904 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.049911 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.049925 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.058982 4856 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.059082 4856 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059181 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059257 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059263 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059270 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059276 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059282 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059289 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059295 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059300 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059305 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059309 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059314 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059319 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059326 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
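Between the warning bursts the kubelet prints the resolved gate map in Go's fmt syntax ("feature gates: {map[Key:value ...]}"), which is awkward to consume from scripts. A small parser for journal lines of that shape; the LINE constant is a trimmed excerpt of the map logged above:

```python
import re

LINE = ('feature gates: {map[CloudDualStackNodeIPs:true '
        'DisableKubeletCloudCredentialProviders:true KMSv1:true '
        'NodeSwap:false ValidatingAdmissionPolicy:true]}')

def parse_go_map(line: str) -> dict[str, bool]:
    """Parse a Go fmt-printed map[...] of gate->bool into a dict."""
    body = re.search(r"map\[([^\]]*)\]", line).group(1)
    pairs = (item.split(":", 1) for item in body.split())
    return {k: v == "true" for k, v in pairs}

print(parse_go_map(LINE))  # {'CloudDualStackNodeIPs': True, ...}
```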
Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059335 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059341 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059346 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059351 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059355 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059360 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059365 4856 feature_gate.go:330] unrecognized feature gate: Example Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059369 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059374 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059379 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059384 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059388 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059393 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059398 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059402 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059409 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059413 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059418 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059423 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059429 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059434 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059439 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059445 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059451 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059458 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059465 4856 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059471 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059478 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059486 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059492 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059498 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059504 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059509 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059515 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059520 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059525 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059530 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059535 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059539 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059544 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059549 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059554 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059559 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059565 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059577 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059583 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059589 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
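The same warning set is printed again here, apparently once per feature-gate apply pass, so grepping the journal yields many duplicates. To recover the distinct gate names, a dedupe sketch (assumes journal text on stdin, e.g. journalctl -u kubelet | python3 dedupe_gates.py):

```python
import re
import sys

# Collect the distinct names behind the repeated
# "unrecognized feature gate: <Name>" warnings read from stdin.
names = set()
for line in sys.stdin:
    m = re.search(r"unrecognized feature gate: (\w+)", line)
    if m:
        names.add(m.group(1))

print(f"{len(names)} distinct unrecognized gates")
for name in sorted(names):
    print(" ", name)
```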
Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059597 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059603 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059608 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059613 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059620 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059625 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059629 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059633 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059639 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059643 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.059653 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059855 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059866 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059871 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059875 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059880 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059885 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059889 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059893 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059897 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059904 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059907 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059912 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 
09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059915 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059919 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059924 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059927 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059931 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059936 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059941 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059946 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059949 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059953 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059957 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059961 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059965 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059969 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059973 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059977 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059980 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059985 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059989 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059993 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.059997 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060001 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060006 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060010 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060014 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060018 4856 feature_gate.go:353] Setting GA 
feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060023 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060028 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060033 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060038 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060042 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060048 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060054 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060059 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060063 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060068 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060072 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060076 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060080 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060085 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060089 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060093 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060097 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060101 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060105 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060109 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060113 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060117 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060121 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060124 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060128 4856 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallAWS Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060132 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060136 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060141 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060144 4856 feature_gate.go:330] unrecognized feature gate: Example Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060148 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060151 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060157 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.060162 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.060215 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.060465 4856 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.063451 4856 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.063551 4856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
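With client rotation on and the bootstrap kubeconfig still valid, the kubelet keeps its working credentials at /var/lib/kubelet/pki/kubelet-client-current.pem, as logged above. To eyeball the validity window that the rotation scheduling is based on, one can shell out to openssl (a sketch; the path is from the log, -noout/-subject/-dates are standard openssl x509 options, and reading the file typically requires root):

```python
import subprocess

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

# Print the subject and validity window of the kubelet's current
# client certificate; requires the openssl CLI and read access to PEM.
out = subprocess.run(
    ["openssl", "x509", "-in", PEM, "-noout", "-subject", "-dates"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```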
Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.064039 4856 server.go:997] "Starting client certificate rotation" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.064063 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.064269 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 12:45:00.036088888 +0000 UTC Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.064392 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 795h32m47.971701422s for next certificate rotation Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.076197 4856 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.077654 4856 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.088845 4856 log.go:25] "Validated CRI v1 runtime API" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.110824 4856 log.go:25] "Validated CRI v1 image API" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.112519 4856 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.114927 4856 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-09-07-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.114956 4856 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.128631 4856 manager.go:217] Machine: {Timestamp:2025-12-03 09:12:12.127372534 +0000 UTC m=+0.310264855 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d39e84ae-b2cf-4078-935b-5fb0be0ab617 BootID:5537c787-d762-4b55-bd38-ab8197889b01 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c8:e7:fa Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c8:e7:fa Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a7:12:c1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1e:24:30 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e5:ca:cf Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3e:0e:2d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:c6:ac:25:62:be Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:d4:b7:1f:c7:ce Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.128919 4856 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.129067 4856 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.129758 4856 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.130102 4856 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.130156 4856 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.130492 4856 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.130510 4856 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.130855 4856 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.130907 4856 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 
09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.131433 4856 state_mem.go:36] "Initialized new in-memory state store" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.131672 4856 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.132500 4856 kubelet.go:418] "Attempting to sync node with API server" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.132525 4856 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.132557 4856 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.132575 4856 kubelet.go:324] "Adding apiserver pod source" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.132592 4856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.136031 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.136116 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.136129 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.136241 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.143128 4856 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.143577 4856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
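
The reflector.go "connection refused" errors above are expected at this point in boot: the kubelet comes up before the static-pod apiserver that it will itself launch, so client-go's list/watch attempts against api-int.crc.testing:6443 fail and are retried with backoff until the apiserver answers. A sketch of that retry loop, with endpoint and backoff constants assumed for illustration (client-go's actual reflector backoff differs in detail):

// waitapi.go: retry-with-backoff probe of the apiserver endpoint named
// in the reflector errors above. Constants are assumptions, not client-go's.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "api-int.crc.testing:6443"
	backoff := time.Second
	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("apiserver reachable after %d attempts\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 30*time.Second { // cap the growth, as watch retries do
			backoff *= 2
		}
	}
}
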
Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.144406 4856 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.144989 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145015 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145025 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145034 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145047 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145055 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145063 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145076 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145084 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145093 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145105 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145113 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.145394 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.146206 4856 server.go:1280] "Started kubelet" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.146338 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.146673 4856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.147056 4856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.147752 4856 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 09:12:12 crc systemd[1]: Started Kubernetes Kubelet. 
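
Once "Started kubelet" and the podresources ratelimit lines appear, the kubelet is serving its podresources API on the unix socket named in the log. A quick local check, assuming the default CRC path and root access to /var/lib/kubelet, that the socket is actually accepting connections:

// probe.go: tiny liveness check of the podresources socket from the
// server.go:236 entry above. Path is the one logged; run as root.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/pod-resources/kubelet.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("podresources socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("kubelet podresources API is listening on", sock)
}

A successful dial only proves the listener exists; querying actual pod resource assignments would need the gRPC client on top of this connection.
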
Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.149514 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.149620 4856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.149679 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:39:45.210567609 +0000 UTC Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.149746 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 576h27m33.060824244s for next certificate rotation Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.149998 4856 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.150018 4856 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.150135 4856 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.150524 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.150741 4856 factory.go:55] Registering systemd factory Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.150764 4856 factory.go:221] Registration of the systemd container factory successfully Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.155214 4856 factory.go:153] Registering CRI-O factory Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.155731 4856 factory.go:221] Registration of the crio container factory successfully Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.155837 4856 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.155864 4856 factory.go:103] Registering Raw factory Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.155880 4856 manager.go:1196] Started watching for new ooms in manager Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.156601 4856 manager.go:319] Starting recovery of all containers Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.155281 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="200ms" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.151283 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.159723 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:12 crc kubenswrapper[4856]: 
E1203 09:12:12.159749 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187da99fa5d4b8ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 09:12:12.146178254 +0000 UTC m=+0.329070555,LastTimestamp:2025-12-03 09:12:12.146178254 +0000 UTC m=+0.329070555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.161340 4856 server.go:460] "Adding debug handlers to kubelet server" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163565 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163614 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163624 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163633 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163641 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163650 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163659 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163668 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 03 09:12:12 crc 
kubenswrapper[4856]: I1203 09:12:12.163678 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163688 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163698 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163707 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163716 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163729 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163753 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163762 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163771 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163780 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163789 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163797 4856 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163823 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163833 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163842 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163868 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163877 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163899 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163910 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163921 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163929 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163937 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163946 4856 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163955 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163964 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163973 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163982 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.163991 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164000 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164008 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164017 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164026 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164035 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164045 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164055 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164067 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164077 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164086 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164097 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164107 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164116 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164125 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164135 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164143 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164156 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164166 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164176 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164186 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164196 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164206 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164216 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164224 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164232 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164241 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164251 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164259 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164269 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164278 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164287 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164296 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164306 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164314 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164323 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164333 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164341 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164350 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164357 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164367 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164378 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164387 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164401 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164411 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164419 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164428 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164438 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164449 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164457 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164465 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164473 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164484 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164492 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164501 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164511 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164519 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164528 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164537 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164545 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164554 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164562 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164571 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164582 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164590 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164599 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164609 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164617 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164627 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164639 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164648 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164657 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164667 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164677 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164686 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164697 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164706 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164714 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164724 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164732 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164741 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164749 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164759 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164768 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164777 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164786 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164795 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164816 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164825 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164833 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164842 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164851 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164860 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164869 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164877 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164886 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164895 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164906 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164916 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164925 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164935 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164946 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164955 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164965 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164974 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164983 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.164993 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.165008 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.165016 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.165025 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.165035 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.165043 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.165053 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166384 4856 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166405 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166417 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166427 4856 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166439 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166448 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166459 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166476 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166485 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166493 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166502 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166511 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166519 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166527 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166536 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166544 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166554 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166564 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166572 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166582 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166591 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166600 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166610 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166621 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166630 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166640 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166651 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166660 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166671 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166681 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166691 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166701 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166712 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166721 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166732 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166743 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166753 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166763 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166774 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166785 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166795 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166817 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166827 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166838 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166847 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166858 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166868 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166878 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166887 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166897 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166908 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166918 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166929 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166939 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166950 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166960 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166970 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166980 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166989 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.166999 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.167008 4856 reconstruct.go:97] "Volume reconstruction finished" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.167015 4856 reconciler.go:26] "Reconciler: start to sync state" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.175711 4856 manager.go:324] Recovery completed Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.184637 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.191687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.191734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.191745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.193362 4856 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.193391 4856 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.193425 4856 state_mem.go:36] "Initialized new in-memory state store" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.251585 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.351865 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.357980 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="400ms" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.452779 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.553909 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.654386 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.685613 4856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.687499 4856 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.687622 4856 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.687700 4856 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.687834 4856 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 09:12:12 crc kubenswrapper[4856]: W1203 09:12:12.688869 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.689034 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.753036 4856 policy_none.go:49] "None policy: Start" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.754171 4856 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.754205 4856 state_mem.go:35] "Initializing new in-memory state store" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.754757 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.758549 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="800ms" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.788002 4856 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809047 4856 manager.go:334] "Starting Device Plugin manager" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809127 4856 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809147 4856 server.go:79] "Starting device plugin registration server" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809571 4856 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809593 4856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809857 4856 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809936 4856 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.809944 4856 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.815439 4856 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.910653 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.912524 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.912589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.912599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.912629 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:12 crc kubenswrapper[4856]: E1203 09:12:12.913375 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.988853 4856 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.989077 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.990853 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.990936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.990954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.991259 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.991662 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.991733 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.992684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.992731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.992748 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.992772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.992830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.992837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.993077 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.993231 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.993292 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994432 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.994956 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995004 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995738 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995911 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.995990 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.996533 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.996553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.996564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997648 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.997688 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.998655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.998683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:12 crc kubenswrapper[4856]: I1203 09:12:12.998696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077882 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077923 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077941 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077959 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.077994 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078012 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078030 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078047 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078114 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078149 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.078171 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.113634 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.114940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.115008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:13 crc 
kubenswrapper[4856]: I1203 09:12:13.115025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.115052 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.115768 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.146999 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.179124 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.179218 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180224 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180281 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180327 4856 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180354 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180409 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180431 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180435 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180471 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180480 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180471 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180518 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180525 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180493 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180498 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180545 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180507 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180563 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180577 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180632 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180651 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180673 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180701 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180737 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.180850 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.244458 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.244592 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.329621 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.339763 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.344078 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.344156 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.350978 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2f7a7bb2e7ec2bf15c97860cb6b92405d5cbd9618b3548313f114edc4951ffd8 WatchSource:0}: Error finding container 2f7a7bb2e7ec2bf15c97860cb6b92405d5cbd9618b3548313f114edc4951ffd8: Status 404 returned error can't find the container with id 2f7a7bb2e7ec2bf15c97860cb6b92405d5cbd9618b3548313f114edc4951ffd8 Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.356197 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.359904 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1a5d8bf55db3e58333f257ad13e995271aef3c60a8b1ee39979e79be191e2d6e WatchSource:0}: Error finding container 1a5d8bf55db3e58333f257ad13e995271aef3c60a8b1ee39979e79be191e2d6e: Status 404 returned error can't find the container with id 1a5d8bf55db3e58333f257ad13e995271aef3c60a8b1ee39979e79be191e2d6e Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.375669 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.383994 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.384368 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8cd8cd4335f7bcc48dc78f5f5727a52e72a7a43845896a7095cdeacaf4bff6c4 WatchSource:0}: Error finding container 8cd8cd4335f7bcc48dc78f5f5727a52e72a7a43845896a7095cdeacaf4bff6c4: Status 404 returned error can't find the container with id 8cd8cd4335f7bcc48dc78f5f5727a52e72a7a43845896a7095cdeacaf4bff6c4 Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.402272 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c211b0f2b74a8271a77803da30311bbe238bb7c94ace4b6ee9e24455b80ff16c WatchSource:0}: Error finding container c211b0f2b74a8271a77803da30311bbe238bb7c94ace4b6ee9e24455b80ff16c: Status 404 returned error can't find the container with id c211b0f2b74a8271a77803da30311bbe238bb7c94ace4b6ee9e24455b80ff16c Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.403346 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3b2cffad75c8c7fa6cfaaebf33cdaf4f270a61e98afbbb846373fc264d3eccc6 WatchSource:0}: Error finding container 3b2cffad75c8c7fa6cfaaebf33cdaf4f270a61e98afbbb846373fc264d3eccc6: Status 404 returned error can't find the container with id 3b2cffad75c8c7fa6cfaaebf33cdaf4f270a61e98afbbb846373fc264d3eccc6 Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.516383 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.517904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.518036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.518058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.518086 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.518667 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.561174 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="1.6s" Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.694745 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8cd8cd4335f7bcc48dc78f5f5727a52e72a7a43845896a7095cdeacaf4bff6c4"} Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.695798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1a5d8bf55db3e58333f257ad13e995271aef3c60a8b1ee39979e79be191e2d6e"} Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.696748 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2f7a7bb2e7ec2bf15c97860cb6b92405d5cbd9618b3548313f114edc4951ffd8"} Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.697597 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b2cffad75c8c7fa6cfaaebf33cdaf4f270a61e98afbbb846373fc264d3eccc6"} Dec 03 09:12:13 crc kubenswrapper[4856]: I1203 09:12:13.698444 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c211b0f2b74a8271a77803da30311bbe238bb7c94ace4b6ee9e24455b80ff16c"} Dec 03 09:12:13 crc kubenswrapper[4856]: W1203 09:12:13.760125 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:13 crc kubenswrapper[4856]: E1203 09:12:13.760257 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.147362 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.337634 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.339196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.339262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.339290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.339337 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:14 crc kubenswrapper[4856]: E1203 09:12:14.339903 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 09:12:14 crc kubenswrapper[4856]: E1203 09:12:14.464893 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.129:6443: 
connect: connection refused" event="&Event{ObjectMeta:{crc.187da99fa5d4b8ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 09:12:12.146178254 +0000 UTC m=+0.329070555,LastTimestamp:2025-12-03 09:12:12.146178254 +0000 UTC m=+0.329070555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.704605 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e" exitCode=0 Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.704701 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e"} Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.704754 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.706211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.706259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.706272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.707034 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363" exitCode=0 Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.707101 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363"} Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.707222 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.708187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.708219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.708227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.711400 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.713950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.713991 4856 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.714004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.715003 4856 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f" exitCode=0 Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.715298 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.715293 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f"} Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.716432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.716455 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.716466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.718331 4856 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83" exitCode=0 Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.718368 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83"} Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.718444 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.719342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.719377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.719388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.722973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b"} Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.723027 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4"} Dec 03 09:12:14 crc kubenswrapper[4856]: I1203 09:12:14.723042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843"} Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.147700 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:15 crc kubenswrapper[4856]: E1203 09:12:15.163065 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="3.2s" Dec 03 09:12:15 crc kubenswrapper[4856]: W1203 09:12:15.625568 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:15 crc kubenswrapper[4856]: E1203 09:12:15.625733 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.733299 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e61d51f69c6aae8a8d7d305e9b32aa0de96e18ad39db268a555d2b575b7ee69d"} Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.733490 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.735490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.735540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.735557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.736056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501"} Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.739904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b"} Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.739961 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.740782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.740955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.740977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.741502 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5"} Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.742739 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903" exitCode=0 Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.742777 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903"} Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.742885 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.743854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.743891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.743900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.940268 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.941559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.941594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.941605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:15 crc kubenswrapper[4856]: I1203 09:12:15.941629 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:15 crc kubenswrapper[4856]: E1203 09:12:15.942327 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 09:12:16 crc kubenswrapper[4856]: W1203 09:12:16.110893 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:16 crc kubenswrapper[4856]: E1203 09:12:16.110981 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:16 crc kubenswrapper[4856]: W1203 09:12:16.124892 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:16 crc kubenswrapper[4856]: E1203 09:12:16.124983 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.147507 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:16 crc kubenswrapper[4856]: W1203 09:12:16.430482 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:16 crc kubenswrapper[4856]: E1203 09:12:16.430896 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.748644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541"} Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.748722 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34"} Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.752636 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8" exitCode=0 Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.752707 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8"} Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.752757 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.753573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.753600 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.753610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.755964 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033"} Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756020 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756028 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98"} Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756052 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756137 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756961 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.756975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.757405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.757437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.757454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.757510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.757553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:16 crc kubenswrapper[4856]: I1203 09:12:16.757565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.159428 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.760232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d"} Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.761896 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.761943 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.762433 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c"} Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.762769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.762795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:17 crc kubenswrapper[4856]: I1203 09:12:17.762822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.147454 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 09:12:18 crc kubenswrapper[4856]: E1203 09:12:18.363752 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="6.4s" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.657883 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.658030 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.659171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.659237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.659248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.766957 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207"} Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.767121 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.767914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.767940 4856 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.767952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:18 crc kubenswrapper[4856]: I1203 09:12:18.770467 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118"} Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.143229 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.144912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.144972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.144984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.145019 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.723615 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.723873 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.725123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.725176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.725188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.781864 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a"} Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.781941 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15"} Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.781939 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.782025 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.783312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.783360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:19 crc kubenswrapper[4856]: I1203 09:12:19.783374 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.594380 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.684422 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.684590 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.685994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.686039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.686052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.788752 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c196c87a2e325e011669f82c4b305fcf95de0278aa0ad16466090b74ac7efecb"} Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.788771 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.788838 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.788876 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.790062 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.790121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.790137 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.790379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.790411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:20 crc kubenswrapper[4856]: I1203 09:12:20.790424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:21 crc kubenswrapper[4856]: I1203 09:12:21.580553 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 09:12:21 crc kubenswrapper[4856]: I1203 09:12:21.791368 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:21 crc kubenswrapper[4856]: I1203 09:12:21.792523 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:21 crc kubenswrapper[4856]: I1203 09:12:21.792567 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:21 crc kubenswrapper[4856]: I1203 09:12:21.792577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.362601 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.362724 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.363863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.363892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.363902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.367909 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.794106 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.794125 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.795241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.795284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.795294 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.795402 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.795426 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.795441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:22 crc kubenswrapper[4856]: E1203 09:12:22.815693 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.872064 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.872267 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.873737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.873860 4856 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:22 crc kubenswrapper[4856]: I1203 09:12:22.873875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.324018 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.796479 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.797565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.797613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.797623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.922892 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.923132 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.924479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.924537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.924551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:23 crc kubenswrapper[4856]: I1203 09:12:23.930850 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.671706 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.672006 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.673059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.673105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.673117 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.798957 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.799706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.799738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
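[Annotation] By this point the kubelet has started sandboxes and containers for every static pod (etcd-crc, kube-apiserver-crc, kube-controller-manager-crc, openshift-kube-scheduler-crc, kube-rbac-proxy-crio-crc), but it still cannot reach the API server at api-int.crc.testing:6443, so node registration and the kube-node-lease calls keep failing with "connection refused" and the retry interval doubles on each failure: the "Failed to ensure lease exists, will retry" entries show interval="1.6s", then "3.2s", then "6.4s". Below is a minimal Go sketch of that doubling back-off, using the endpoint and starting interval taken from the entries above; it is an illustration of the pattern visible in the log, not kubelet source.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Starting interval taken from the first "Failed to ensure lease
        // exists, will retry" entry above (interval="1.6s").
        interval := 1600 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            // The kubelet's registration and lease requests fail at this
            // layer: the TCP dial to the API server is refused outright.
            conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("API server reachable; retries stop")
                return
            }
            fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
            time.Sleep(interval)
            interval *= 2 // 1.6s -> 3.2s -> 6.4s, matching the logged intervals
        }
    }

In the entries that follow, the failure mode shifts from "connection refused" to TLS handshake timeouts and then to anonymous-user 403s on /livez, which is consistent with the kube-apiserver container coming up during this window.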
Dec 03 09:12:24 crc kubenswrapper[4856]: I1203 09:12:24.799750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:26 crc kubenswrapper[4856]: I1203 09:12:26.923568 4856 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 09:12:26 crc kubenswrapper[4856]: I1203 09:12:26.923646 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 09:12:29 crc kubenswrapper[4856]: E1203 09:12:29.146191 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.147237 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 03 09:12:29 crc kubenswrapper[4856]: W1203 09:12:29.304897 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.305044 4856 trace.go:236] Trace[1631924943]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 09:12:19.303) (total time: 10001ms): Dec 03 09:12:29 crc kubenswrapper[4856]: Trace[1631924943]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:12:29.304) Dec 03 09:12:29 crc kubenswrapper[4856]: Trace[1631924943]: [10.001221888s] [10.001221888s] END Dec 03 09:12:29 crc kubenswrapper[4856]: E1203 09:12:29.305079 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.674929 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.675031 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.680507 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.680603 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.812302 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.814146 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207" exitCode=255 Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.814183 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207"} Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.814349 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.814995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.815022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.815034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:29 crc kubenswrapper[4856]: I1203 09:12:29.815529 4856 scope.go:117] "RemoveContainer" containerID="8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207" Dec 03 09:12:30 crc kubenswrapper[4856]: I1203 09:12:30.820013 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 09:12:30 crc kubenswrapper[4856]: I1203 09:12:30.822924 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551"} Dec 03 09:12:30 crc kubenswrapper[4856]: I1203 09:12:30.823361 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:30 crc kubenswrapper[4856]: I1203 09:12:30.824977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:30 crc 
Dec 03 09:12:30 crc kubenswrapper[4856]: I1203 09:12:30.825851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:30 crc kubenswrapper[4856]: I1203 09:12:30.826037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.669707 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.669931 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.671048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.671079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.671090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.690717 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.825350 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.826502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.826557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:31 crc kubenswrapper[4856]: I1203 09:12:31.826574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:32 crc kubenswrapper[4856]: E1203 09:12:32.815908 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 03 09:12:32 crc kubenswrapper[4856]: I1203 09:12:32.872437 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 09:12:32 crc kubenswrapper[4856]: I1203 09:12:32.872652 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 09:12:32 crc kubenswrapper[4856]: I1203 09:12:32.874024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:32 crc kubenswrapper[4856]: I1203 09:12:32.874066 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:32 crc kubenswrapper[4856]: I1203 09:12:32.874080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:33 crc kubenswrapper[4856]: I1203 09:12:33.332126 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 09:12:33 crc kubenswrapper[4856]: I1203 09:12:33.832640 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 09:12:33 crc kubenswrapper[4856]: I1203 09:12:33.833692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:33 crc kubenswrapper[4856]: I1203 09:12:33.833726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:33 crc kubenswrapper[4856]: I1203 09:12:33.833737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:33 crc kubenswrapper[4856]: I1203 09:12:33.837774 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.730963 4856 trace.go:236] Trace[525893330]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 09:12:22.389) (total time: 12341ms):
Dec 03 09:12:34 crc kubenswrapper[4856]: Trace[525893330]: ---"Objects listed" error: 12341ms (09:12:34.730)
Dec 03 09:12:34 crc kubenswrapper[4856]: Trace[525893330]: [12.341061081s] [12.341061081s] END
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.731007 4856 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.734358 4856 trace.go:236] Trace[215132039]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 09:12:22.102) (total time: 12631ms):
Dec 03 09:12:34 crc kubenswrapper[4856]: Trace[215132039]: ---"Objects listed" error: 12631ms (09:12:34.734)
Dec 03 09:12:34 crc kubenswrapper[4856]: Trace[215132039]: [12.631990877s] [12.631990877s] END
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.734398 4856 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.735550 4856 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.735881 4856 trace.go:236] Trace[1517063054]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 09:12:20.576) (total time: 14159ms):
Dec 03 09:12:34 crc kubenswrapper[4856]: Trace[1517063054]: ---"Objects listed" error: 14159ms (09:12:34.735)
Dec 03 09:12:34 crc kubenswrapper[4856]: Trace[1517063054]: [14.159462342s] [14.159462342s] END
Dec 03 09:12:34 crc kubenswrapper[4856]: I1203 09:12:34.735908 4856 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.196493 4856 apiserver.go:52] "Watching apiserver"
Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.201166 4856 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.201390 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.201814 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.201828 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.201754 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.201896 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.201952 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.202068 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.202157 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.202171 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
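These pods are all skipped for the same root cause: the container runtime reports NetworkReady=false because no CNI configuration exists yet in /etc/kubernetes/cni/net.d/. A small sketch of the check an admin might run while waiting for the network operator to write that file; the directory comes straight from the error above, while the extension filter is an assumption about typical CNI config names:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path quoted in the kubelet error
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // commonly accepted CNI config suffixes
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration yet; pod sandboxes will keep failing to start")
	}
}
```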
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.203873 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.204002 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.206517 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.206560 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.206835 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.207381 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.207755 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.207928 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.208148 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.230764 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 
09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.249223 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.251767 4856 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.261107 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.272478 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.284525 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.295918 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.310030 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.327867 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.342184 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.355117 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.368518 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 
09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371165 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371269 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371293 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371318 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371338 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371361 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371382 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.371977 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.372099 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.372360 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.372795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.373268 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.373281 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.373270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.373352 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.373915 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374047 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374370 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374399 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375014 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374316 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374312 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374640 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.374958 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375125 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375557 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375626 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375149 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375699 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.375698 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376056 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376066 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376078 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376100 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376139 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376172 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376194 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376221 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376347 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376517 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376566 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376592 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376614 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376636 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376682 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376702 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376721 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377145 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377171 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377194 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377215 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377294 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377323 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377345 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377370 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377401 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377450 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377476 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376739 4856 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.376796 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377003 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377084 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377327 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377234 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377570 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377757 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377876 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377912 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377938 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377948 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378125 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378145 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378390 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378401 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.377501 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378479 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378511 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378563 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378604 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378631 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378656 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378680 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378703 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378726 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378747 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378675 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378753 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378772 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378839 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378862 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378888 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378911 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.378990 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379014 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379077 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: 
"image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379160 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379188 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379214 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379249 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379262 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379393 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379504 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379511 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379644 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379686 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379728 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379746 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379758 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379829 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379869 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379901 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379932 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379960 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379985 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380053 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380073 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380091 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380111 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380129 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380176 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380196 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380214 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380236 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380261 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380279 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380296 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380314 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380335 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380358 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380378 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380400 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380423 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380447 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380473 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380497 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380527 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380556 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380585 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380610 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380637 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380666 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380694 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380715 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380737 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380761 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380784 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380842 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380877 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380901 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380934 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380964 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380989 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.381019 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.381047 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.379832 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383391 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380028 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380103 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380218 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380334 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380509 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380637 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380689 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380832 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.380941 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.381091 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:12:35.881060431 +0000 UTC m=+24.063952732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.381343 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.381466 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.381504 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.381858 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382004 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382205 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382343 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382328 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382393 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382628 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382760 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382791 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.382878 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383184 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383335 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383361 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383496 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383639 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383884 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.383961 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384101 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384085 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384432 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384601 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384649 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384903 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384911 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.384986 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.385015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.385045 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.385047 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.385059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.385516 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.386348 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.386464 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.386554 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.386730 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.386857 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.386944 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387127 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387170 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387304 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.385068 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387387 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387416 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387443 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387468 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387492 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387518 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387541 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387565 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387627 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387659 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387668 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387716 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387764 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387789 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387829 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387852 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387878 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387902 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387930 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387958 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.387983 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388008 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388030 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388055 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388079 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388101 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388123 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388146 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388170 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388195 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388201 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388220 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388308 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388361 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388392 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388418 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388445 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388474 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388508 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388533 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388557 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388583 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388616 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388641 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388695 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388717 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388739 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388788 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388829 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388857 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388882 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388906 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388930 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388957 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389006 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389030 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389054 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389080 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389109 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389137 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389164 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389193 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389218 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389243 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389265 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 09:12:35 crc kubenswrapper[4856]: 
I1203 09:12:35.389291 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389326 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389405 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389425 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389469 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389491 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389561 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389594 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389657 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389690 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389720 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389745 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389771 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389858 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:35 crc 
kubenswrapper[4856]: I1203 09:12:35.390060 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390091 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390125 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390275 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390299 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390314 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390325 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390340 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390352 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390363 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390374 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390387 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390400 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390415 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390427 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390437 4856 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390449 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390464 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390477 4856 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390487 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390497 4856 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390507 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390516 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390529 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390540 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390552 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390564 4856 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390576 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390588 4856 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390605 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390617 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390631 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390650 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390663 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390675 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390688 4856 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390701 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390712 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390724 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390735 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390748 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390761 4856 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390774 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390789 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390819 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390832 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390845 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390858 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390867 4856 
reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390878 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390890 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390901 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390914 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390926 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390938 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390951 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390964 4856 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390976 4856 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390990 4856 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391004 4856 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391018 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391036 4856 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391049 4856 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391063 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.395237 4856 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.395952 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388474 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388544 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.388602 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389036 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389053 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389095 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389261 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389576 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389588 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.389635 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390121 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390163 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390432 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390339 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390543 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390939 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.390982 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391217 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391300 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391416 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391518 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391528 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391649 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391919 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.391992 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392058 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392228 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392416 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392514 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392537 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392708 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.392881 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.393033 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.393279 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.393297 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.394481 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.394498 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.394526 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.394765 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.394795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.395278 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.396033 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.396330 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.396335 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.396546 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.396776 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.397614 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.398276 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.398773 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.399276 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.399552 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.400178 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.400478 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.400633 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.400957 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.400647 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.402477 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.403120 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.406946 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.409878 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.410077 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.410182 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.410524 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.410727 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.410836 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.411215 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.411358 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:35.911333812 +0000 UTC m=+24.094226323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.411639 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.411775 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.412455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.412586 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.412692 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:35.912679907 +0000 UTC m=+24.095572418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.413841 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.414245 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.415852 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.417028 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.417437 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.417687 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.418059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.419497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.420245 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.421434 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.421939 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.422176 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.422883 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.425384 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.426421 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.426825 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.426895 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.426920 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.426922 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.426938 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.427032 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:35.926997546 +0000 UTC m=+24.109890057 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.427086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.427178 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.427169 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.427325 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.427415 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.427685 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.428254 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.428256 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.430398 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.430499 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.430857 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.431190 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.432210 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.433009 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.434745 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.435184 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.437022 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.442372 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.442443 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.442465 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.442483 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.442546 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:35.942525967 +0000 UTC m=+24.125418498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.464268 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.465275 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.492254 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.492526 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.492669 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") 
on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.492748 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.493916 4856 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494018 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494257 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494334 4856 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494405 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494482 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494547 4856 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494614 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494688 4856 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494757 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494839 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.494909 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc 
kubenswrapper[4856]: I1203 09:12:35.495006 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495072 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495174 4856 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495253 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495317 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495386 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495454 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495692 4856 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495762 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495851 4856 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.495933 4856 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496107 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496186 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 
03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496253 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496318 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496384 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496452 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496532 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496605 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496670 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496741 4856 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496850 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.496928 4856 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497146 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497330 4856 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497424 4856 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497523 4856 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497695 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497793 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497918 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.498016 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.498965 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.498988 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499000 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499014 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499027 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499037 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499051 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499062 4856 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.493093 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.492876 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499075 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499252 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499268 4856 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499279 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499291 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499302 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499316 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499328 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499339 4856 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499351 4856 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499362 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499372 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499382 4856 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499393 4856 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499404 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.497165 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499414 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499578 4856 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499600 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499615 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499632 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499648 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499662 4856 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499676 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499691 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499702 4856 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 
03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499714 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499727 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499737 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499747 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499757 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499768 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499777 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499789 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499800 4856 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499829 4856 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499839 4856 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499848 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499858 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499868 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499880 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499890 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499901 4856 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499912 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499924 4856 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499936 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499948 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499959 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499970 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499979 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499988 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.499997 4856 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500005 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500014 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500023 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500033 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500042 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500051 4856 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500060 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500071 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500080 4856 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500090 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500098 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500108 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500117 4856 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500127 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500137 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500147 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500156 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500165 4856 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500174 4856 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500183 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500192 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500201 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500216 4856 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500225 4856 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500233 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500243 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.500253 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.501335 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.503733 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.513201 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.515765 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.522161 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.523952 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.529940 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.544368 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 
09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.546816 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.548310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.548342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.548356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.548423 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:35 crc kubenswrapper[4856]: W1203 09:12:35.549069 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-bffd16794fa74641902896aa4ca1d022c370f862b1f31d0e1560e538cc35f0de WatchSource:0}: Error finding container bffd16794fa74641902896aa4ca1d022c370f862b1f31d0e1560e538cc35f0de: Status 404 returned error can't find the container with id bffd16794fa74641902896aa4ca1d022c370f862b1f31d0e1560e538cc35f0de Dec 03 09:12:35 crc kubenswrapper[4856]: W1203 09:12:35.549735 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e81c244e2ebaf0fca3cb39e33441e48839cab58f752a821933f94b3ffc421a2e WatchSource:0}: Error finding container e81c244e2ebaf0fca3cb39e33441e48839cab58f752a821933f94b3ffc421a2e: Status 404 returned error can't find the container with id e81c244e2ebaf0fca3cb39e33441e48839cab58f752a821933f94b3ffc421a2e Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.553054 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.558707 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.570535 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.588206 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 
09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.601368 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.601400 4856 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.602006 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.714094 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.734100 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.748994 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.762172 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.915561 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.915636 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.915685 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.915764 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.915828 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:36.915799957 +0000 UTC m=+25.098692258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.915887 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:12:36.915872539 +0000 UTC m=+25.098764840 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.915939 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: E1203 09:12:35.915960 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:36.915954881 +0000 UTC m=+25.098847182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.949721 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b"} Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.949765 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e81c244e2ebaf0fca3cb39e33441e48839cab58f752a821933f94b3ffc421a2e"} Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.950820 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27"} Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.950846 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c59fb1e02cd3ebb350ec3833e476b5cee157a0a6ae1d0e64fb6e55d747c86a6"} Dec 03 09:12:35 crc kubenswrapper[4856]: I1203 09:12:35.951796 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bffd16794fa74641902896aa4ca1d022c370f862b1f31d0e1560e538cc35f0de"} Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.015992 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:36 crc 
kubenswrapper[4856]: I1203 09:12:36.016070 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016216 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016236 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016259 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016284 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016327 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016340 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016309 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:37.0162937 +0000 UTC m=+25.199186001 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.016426 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:37.016392763 +0000 UTC m=+25.199285054 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.080015 4856 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.083168 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.690370 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.690566 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.690672 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.690791 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.691006 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:36 crc kubenswrapper[4856]: E1203 09:12:36.691097 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.790961 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.791664 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.793623 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.798411 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.799093 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.806646 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.807000 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.808120 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.808892 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.810343 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.824129 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.824772 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.826289 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.827029 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.827640 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.828719 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 
09:12:36.829345 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.830991 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.831469 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.832225 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.836062 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.836745 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.838061 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.838626 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.839482 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.840485 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.841394 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.843130 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.843692 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.844005 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.845249 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.846122 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.846931 4856 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.847474 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.856314 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.857320 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.857973 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.859267 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.860016 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.861422 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.862162 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.863649 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.864461 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.865708 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.866340 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.867307 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.868238 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.868752 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.869456 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.870349 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.871505 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.871997 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.872573 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.873460 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.874011 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.875132 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.875618 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.876549 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.889411 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.901540 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.913519 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.956505 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54"} Dec 03 09:12:36 crc kubenswrapper[4856]: I1203 09:12:36.999843 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:36.999943 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.000102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.000203 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.000277 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:39.000258387 +0000 UTC m=+27.183150688 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.000369 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:12:39.000334309 +0000 UTC m=+27.183226610 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.000454 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.000487 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:39.000480753 +0000 UTC m=+27.183373054 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.016733 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.037251 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.069774 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.080323 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.101255 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.101348 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101488 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101533 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101547 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101566 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101597 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101603 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:39.101585871 +0000 UTC m=+27.284478172 (durationBeforeRetry 2s). 
Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.101255 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.101348 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101488 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101533 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101547 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101566 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101597 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101603 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:39.101585871 +0000 UTC m=+27.284478172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101619 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:37 crc kubenswrapper[4856]: E1203 09:12:37.101707 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:39.101676974 +0000 UTC m=+27.284569455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
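Both projected-volume failures above follow the same pattern: the kube-api-access-* volume cannot be assembled because the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps are not yet registered with the kubelet, and nestedpendingoperations then blocks retries until a deadline ("No retries permitted until ... (durationBeforeRetry 2s)"). A toy version of that gating is sketched below; it assumes a fixed 2s delay for simplicity, whereas the real kubelet grows the backoff on repeated failures, and all names here are invented for illustration.

```go
// retrygate.go - a toy sketch of the "No retries permitted until <t>" gating
// visible in the nestedpendingoperations records above. Per failed operation
// key, remember the earliest allowed next attempt and skip retries before it.
package main

import (
	"errors"
	"fmt"
	"time"
)

type gate struct {
	notBefore map[string]time.Time // operation key -> earliest next attempt
	delay     time.Duration
}

func (g *gate) tryRun(key string, op func() error) error {
	if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			t.Format(time.RFC3339), g.delay)
	}
	if err := op(); err != nil {
		g.notBefore[key] = time.Now().Add(g.delay) // arm the retry window
		return err
	}
	delete(g.notBefore, key) // success clears the gate
	return nil
}

func main() {
	g := &gate{notBefore: map[string]time.Time{}, delay: 2 * time.Second}
	mount := func() error { return errors.New("configmap not registered") } // always fails here
	for i := 0; i < 3; i++ {
		if err := g.tryRun("kube-api-access-cqllr", mount); err != nil {
			fmt.Println("attempt", i, "->", err)
		}
		time.Sleep(500 * time.Millisecond) // attempts inside the window are skipped
	}
}
```

The first attempt fails and arms the window; the next two attempts land inside it and are refused with the same wording the kubelet logs, which is why the records above schedule the retry for 09:12:39 rather than retrying immediately.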
Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.277936 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.390915 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 
09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.527353 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.564990 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.944484 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-d49r7"] Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.944734 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.946212 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.946819 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.947457 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.950113 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gzk5w"] Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.950358 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.951196 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-l9h2m"] Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.957229 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.958883 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.959004 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.959196 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.959765 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.960272 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.960943 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h2mjf"] Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.970558 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.970696 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.970776 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.971008 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.971055 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.971405 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zpk2l"] Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.971576 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.971923 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zpk2l" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.982544 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.982876 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.983010 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.983115 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.987473 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.988684 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.988766 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.988960 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.988716 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 09:12:37 crc kubenswrapper[4856]: I1203 09:12:37.991316 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.003249 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.024832 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
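From here the log shifts from failed status reporting to volume setup for the newly added daemon pods: for each volume, reconciler_common.go:245 first logs VerifyControllerAttachedVolume (the check that a volume in the desired state of the world is attached and expected on this node), and only afterwards does reconciler_common.go:218 log MountVolume (actually making it available to the pod, as seen further down). A toy desired-state/actual-state loop with that two-pass shape is sketched below; the state names and volume list are invented for illustration, and the real kubelet volume manager is considerably more involved.

```go
// reconcile.go - a toy sketch of the two-phase flow visible in the
// reconciler_common.go records that follow: volumes move from "expected"
// through VerifyControllerAttachedVolume to mounted via MountVolume.
package main

import "fmt"

type volumeState int

const (
	expected volumeState = iota // pod spec references the volume
	attached                    // verified attached/expected on this node
	mounted                     // mounted into the pod's filesystem
)

func main() {
	desired := []string{"run-openvswitch", "host-cni-netd", "kube-api-access-vcgsg"}
	actual := map[string]volumeState{}

	// Pass 1: VerifyControllerAttachedVolume - mark each desired volume attached.
	for _, v := range desired {
		if _, ok := actual[v]; !ok {
			actual[v] = expected // first seen in the desired state of the world
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v)
		actual[v] = attached
	}
	// Pass 2: MountVolume - mount everything attached but not yet mounted.
	for _, v := range desired {
		if actual[v] == attached {
			fmt.Printf("MountVolume started for volume %q\n", v)
			actual[v] = mounted
		}
	}
}
```

That ordering is exactly what the timestamps below show: a burst of :245 verification records at 09:12:38.17xxxx, then the matching :218 MountVolume records for the same volumes a tenth of a second later.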
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-bin\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.177678 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-system-cni-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.177765 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-systemd-units\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.177863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-ovn\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.177943 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-cni-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-slash\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178134 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/be0f114f-fadd-4753-8929-4feed01dcf71-hosts-file\") pod \"node-resolver-d49r7\" (UID: \"be0f114f-fadd-4753-8929-4feed01dcf71\") " pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178225 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-netns\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178317 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-log-socket\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178435 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cni-binary-copy\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178482 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-cni-bin\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178511 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcgsg\" (UniqueName: \"kubernetes.io/projected/0244a363-96f5-4b97-824c-b62d42ecee2b-kube-api-access-vcgsg\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178528 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3541a85a-a53e-472a-9323-3bdb8c844e1f-proxy-tls\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178568 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-etc-kubernetes\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178583 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-kubelet\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-systemd\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-env-overrides\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178649 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-script-lib\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178666 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cnibin\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178685 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-cni-multus\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178702 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-system-cni-dir\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178719 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-netns\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178734 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3541a85a-a53e-472a-9323-3bdb8c844e1f-mcd-auth-proxy-config\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-cni-binary-copy\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178766 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-conf-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178786 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-var-lib-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178816 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-daemon-config\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcr6r\" (UniqueName: \"kubernetes.io/projected/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-kube-api-access-xcr6r\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178895 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-k8s-cni-cncf-io\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178915 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178931 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-config\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178946 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0244a363-96f5-4b97-824c-b62d42ecee2b-ovn-node-metrics-cert\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178962 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-node-log\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178978 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngxv\" (UniqueName: \"kubernetes.io/projected/be0f114f-fadd-4753-8929-4feed01dcf71-kube-api-access-fngxv\") pod \"node-resolver-d49r7\" (UID: \"be0f114f-fadd-4753-8929-4feed01dcf71\") " pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.178993 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-cnibin\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179033 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-os-release\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179051 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-os-release\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-hostroot\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179082 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-etc-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179099 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqm8\" (UniqueName: \"kubernetes.io/projected/84cf8e52-cc64-49f6-93d4-6368ec50e14c-kube-api-access-pvqm8\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.179145 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-socket-dir-parent\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " 
pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.250060 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280061 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-slash\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/be0f114f-fadd-4753-8929-4feed01dcf71-hosts-file\") pod \"node-resolver-d49r7\" (UID: \"be0f114f-fadd-4753-8929-4feed01dcf71\") " pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280137 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-cni-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " 
pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280158 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-netns\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-log-socket\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280214 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cni-binary-copy\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-cni-bin\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcgsg\" (UniqueName: \"kubernetes.io/projected/0244a363-96f5-4b97-824c-b62d42ecee2b-kube-api-access-vcgsg\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280278 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280299 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3541a85a-a53e-472a-9323-3bdb8c844e1f-proxy-tls\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280317 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-etc-kubernetes\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280336 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-kubelet\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc 
kubenswrapper[4856]: I1203 09:12:38.280376 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-env-overrides\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-script-lib\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280413 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cnibin\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280430 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-cni-multus\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280449 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-systemd\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280470 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-netns\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-system-cni-dir\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280509 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-var-lib-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280530 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 
09:12:38.280550 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3541a85a-a53e-472a-9323-3bdb8c844e1f-mcd-auth-proxy-config\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280569 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-cni-binary-copy\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-conf-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280616 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcr6r\" (UniqueName: \"kubernetes.io/projected/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-kube-api-access-xcr6r\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280634 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-daemon-config\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280652 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-config\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280703 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0244a363-96f5-4b97-824c-b62d42ecee2b-ovn-node-metrics-cert\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280722 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-k8s-cni-cncf-io\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280741 4856 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fngxv\" (UniqueName: \"kubernetes.io/projected/be0f114f-fadd-4753-8929-4feed01dcf71-kube-api-access-fngxv\") pod \"node-resolver-d49r7\" (UID: \"be0f114f-fadd-4753-8929-4feed01dcf71\") " pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280762 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-cnibin\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280780 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-node-log\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280819 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-os-release\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-os-release\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280869 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-hostroot\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-etc-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-socket-dir-parent\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqm8\" (UniqueName: 
\"kubernetes.io/projected/84cf8e52-cc64-49f6-93d4-6368ec50e14c-kube-api-access-pvqm8\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.280982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-kubelet\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281000 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-netd\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3541a85a-a53e-472a-9323-3bdb8c844e1f-rootfs\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-multus-certs\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281077 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzrmt\" (UniqueName: \"kubernetes.io/projected/3541a85a-a53e-472a-9323-3bdb8c844e1f-kube-api-access-nzrmt\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281096 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-systemd-units\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281113 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-ovn\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281131 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-bin\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-system-cni-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281271 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-system-cni-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281330 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-slash\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281367 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/be0f114f-fadd-4753-8929-4feed01dcf71-hosts-file\") pod \"node-resolver-d49r7\" (UID: \"be0f114f-fadd-4753-8929-4feed01dcf71\") " pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281660 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-netns\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281748 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-log-socket\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281778 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-cni-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.281834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.282625 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cni-binary-copy\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 
03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.282678 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-cni-bin\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.282774 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-config\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.282875 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-socket-dir-parent\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.283974 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.284026 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-system-cni-dir\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.284057 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-kubelet\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.283712 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-etc-kubernetes\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289063 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-var-lib-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289420 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3541a85a-a53e-472a-9323-3bdb8c844e1f-proxy-tls\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3541a85a-a53e-472a-9323-3bdb8c844e1f-rootfs\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289736 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-kubelet\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.289788 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-netd\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.290248 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3541a85a-a53e-472a-9323-3bdb8c844e1f-mcd-auth-proxy-config\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.291087 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-daemon-config\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.291139 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-systemd-units\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.291169 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-multus-certs\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.291340 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-ovn\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.291372 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-var-lib-cni-multus\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-script-lib\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292029 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-cni-binary-copy\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292057 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-cnibin\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292089 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-systemd\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-netns\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292121 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-bin\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84cf8e52-cc64-49f6-93d4-6368ec50e14c-os-release\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292466 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-host-run-k8s-cni-cncf-io\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " 
pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292505 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-cnibin\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292547 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-hostroot\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-etc-openvswitch\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-os-release\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292571 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0244a363-96f5-4b97-824c-b62d42ecee2b-ovn-node-metrics-cert\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292611 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.292722 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-node-log\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.293853 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-multus-conf-dir\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.294269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-env-overrides\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.535932 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.545546 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcgsg\" (UniqueName: \"kubernetes.io/projected/0244a363-96f5-4b97-824c-b62d42ecee2b-kube-api-access-vcgsg\") pod \"ovnkube-node-h2mjf\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.599473 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqm8\" (UniqueName: \"kubernetes.io/projected/84cf8e52-cc64-49f6-93d4-6368ec50e14c-kube-api-access-pvqm8\") pod \"multus-additional-cni-plugins-l9h2m\" (UID: \"84cf8e52-cc64-49f6-93d4-6368ec50e14c\") " pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.607628 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.616660 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzrmt\" (UniqueName: \"kubernetes.io/projected/3541a85a-a53e-472a-9323-3bdb8c844e1f-kube-api-access-nzrmt\") pod \"machine-config-daemon-gzk5w\" (UID: \"3541a85a-a53e-472a-9323-3bdb8c844e1f\") " pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.624875 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcr6r\" (UniqueName: \"kubernetes.io/projected/29870646-4fde-4ebe-a3a9-0ef904f1bbaa-kube-api-access-xcr6r\") pod \"multus-zpk2l\" (UID: \"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\") " pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.670797 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngxv\" (UniqueName: \"kubernetes.io/projected/be0f114f-fadd-4753-8929-4feed01dcf71-kube-api-access-fngxv\") pod \"node-resolver-d49r7\" (UID: \"be0f114f-fadd-4753-8929-4feed01dcf71\") " pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.690347 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:38 crc kubenswrapper[4856]: E1203 09:12:38.690466 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.690523 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:38 crc kubenswrapper[4856]: E1203 09:12:38.690566 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.690606 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:38 crc kubenswrapper[4856]: E1203 09:12:38.690649 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.732609 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:38Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.733095 4856 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.864474 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d49r7" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.878005 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.892280 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.900490 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zpk2l" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.976408 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:38Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.978704 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerStarted","Data":"1c1a7e818b1b13cd8e2793aed061db64e741e62ffbbaee3a4ddbc1aca6adfbb8"} Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.983182 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"456a4383a5ce3e76e7e0a9b25d3a52e3a7e25053f11380dee6e30351f5fddd9c"} Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.986115 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d49r7" 
event={"ID":"be0f114f-fadd-4753-8929-4feed01dcf71","Type":"ContainerStarted","Data":"f87d9298fd119c5e56f8947dcb167a9beed2a82422f7fe229021383150e8818e"} Dec 03 09:12:38 crc kubenswrapper[4856]: I1203 09:12:38.999659 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:38Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.021249 4856 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.042438 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 
09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.044766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.044915 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.044963 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:12:43.04493044 +0000 UTC m=+31.227822891 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.045065 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.045167 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:43.045141825 +0000 UTC m=+31.228034126 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.045193 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.045299 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.045328 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:43.04532183 +0000 UTC m=+31.228214131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.060856 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.078528 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.094617 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.116194 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.134838 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.146907 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.146970 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147116 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147135 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147148 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147199 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:43.147184658 +0000 UTC m=+31.330076959 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147562 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147582 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147592 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:39 crc kubenswrapper[4856]: E1203 09:12:39.147619 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:43.147611059 +0000 UTC m=+31.330503360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.156924 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.173529 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.196790 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.224105 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.243684 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.263453 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.284922 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.994396 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64"} Dec 03 09:12:39 crc kubenswrapper[4856]: I1203 09:12:39.996044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d49r7" event={"ID":"be0f114f-fadd-4753-8929-4feed01dcf71","Type":"ContainerStarted","Data":"77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.007789 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.008092 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.008197 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"b7159a4a16859c540f857a557328ed393065d5e3e5fb9b58ed03c330f9a208f2"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.017154 4856 generic.go:334] "Generic (PLEG): container finished" podID="84cf8e52-cc64-49f6-93d4-6368ec50e14c" containerID="9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9" exitCode=0 Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.017292 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerDied","Data":"9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.017328 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerStarted","Data":"a2e214173779aae7489ded65214de0088e6ae1c9eab0160837bb4040cb131798"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.019959 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerStarted","Data":"40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.021589 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819" exitCode=0 Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.021635 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.052698 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.288841 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.332033 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.357629 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.384158 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.410919 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.463677 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.482579 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.505149 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.528468 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.550552 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.588056 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.605696 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.623390 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.639050 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.717669 4856 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:40 crc kubenswrapper[4856]: E1203 09:12:40.717821 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.717659 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:40 crc kubenswrapper[4856]: E1203 09:12:40.718218 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.718383 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:40 crc kubenswrapper[4856]: E1203 09:12:40.718530 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.726451 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.751683 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.783583 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.798123 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.813322 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.825255 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.841044 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.861514 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.877581 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.896513 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:40 crc kubenswrapper[4856]: I1203 09:12:40.916390 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:40Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.026597 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.026651 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.029955 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerStarted","Data":"7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226"} Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.047612 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.058896 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.076581 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.101483 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.122113 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.136643 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.149906 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.171586 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.191977 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.204915 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.220051 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.238658 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:41 crc kubenswrapper[4856]: I1203 09:12:41.279467 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.074378 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.074767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.089321 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wfssg"] Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.090003 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.105342 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.133162 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.143746 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgc5\" (UniqueName: \"kubernetes.io/projected/c9a6e0bf-3097-46e3-ad75-087b63c827dc-kube-api-access-wvgc5\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.144092 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c9a6e0bf-3097-46e3-ad75-087b63c827dc-serviceca\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.144209 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9a6e0bf-3097-46e3-ad75-087b63c827dc-host\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.143964 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.145746 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.262829 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c9a6e0bf-3097-46e3-ad75-087b63c827dc-serviceca\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.262902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9a6e0bf-3097-46e3-ad75-087b63c827dc-host\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.262946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvgc5\" (UniqueName: \"kubernetes.io/projected/c9a6e0bf-3097-46e3-ad75-087b63c827dc-kube-api-access-wvgc5\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.264584 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c9a6e0bf-3097-46e3-ad75-087b63c827dc-serviceca\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.264660 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9a6e0bf-3097-46e3-ad75-087b63c827dc-host\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.312375 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc 
kubenswrapper[4856]: I1203 09:12:42.342555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvgc5\" (UniqueName: \"kubernetes.io/projected/c9a6e0bf-3097-46e3-ad75-087b63c827dc-kube-api-access-wvgc5\") pod \"node-ca-wfssg\" (UID: \"c9a6e0bf-3097-46e3-ad75-087b63c827dc\") " pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.391213 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.409103 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wfssg" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.423671 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.441756 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.455716 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.471288 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.484036 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: W1203 09:12:42.495599 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a6e0bf_3097_46e3_ad75_087b63c827dc.slice/crio-11b42074eebca482f4bfef2329b84719c96fd595c944294b3c6ca240916eb0ef WatchSource:0}: Error finding container 11b42074eebca482f4bfef2329b84719c96fd595c944294b3c6ca240916eb0ef: Status 404 returned error can't find the container with id 11b42074eebca482f4bfef2329b84719c96fd595c944294b3c6ca240916eb0ef Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.510228 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.526053 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.541323 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.553947 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.555993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.556025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.556037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.556159 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.560504 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.573386 4856 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.573715 4856 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.575150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.575207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.575218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.575244 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.575262 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.584703 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.595917 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.600905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.600975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.600987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.601012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.601024 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.601580 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.613416 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.616318 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d
39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.625765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.625824 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.625847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.625865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.625876 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.639889 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list elided …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.644168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.644222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.644236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.644257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.644271 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.660115 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list elided …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.667146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.667181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.667191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.667209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.667220 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.680311 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list elided …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.680470 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.683196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.683219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.683227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.683242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.683252 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.688835 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.688990 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.689015 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.689145 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.689460 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:42 crc kubenswrapper[4856]: E1203 09:12:42.689560 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.709617 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.724903 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 
09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.739713 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.754372 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.768623 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.793060 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.794411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.794511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.794576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.794638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.794694 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.813685 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.837154 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.849069 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.864031 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.879319 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.883757 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.894072 4856 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.898027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.898059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.898069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.898084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.898095 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:42Z","lastTransitionTime":"2025-12-03T09:12:42Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.907670 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os
-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.929244 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.945234 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:42 crc kubenswrapper[4856]: I1203 09:12:42.961782 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.000324 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.000403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.000414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.000435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.000448 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.034621 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.051468 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a
5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.071701 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.072742 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.072957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.073108 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:12:51.073002915 +0000 UTC m=+39.255895376 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.073191 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.073310 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:51.073285013 +0000 UTC m=+39.256177314 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.073356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.073551 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.073584 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:51.07357539 +0000 UTC m=+39.256467771 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.081453 4856 generic.go:334] "Generic (PLEG): container finished" podID="84cf8e52-cc64-49f6-93d4-6368ec50e14c" containerID="7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226" exitCode=0 Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.081531 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerDied","Data":"7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.082988 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wfssg" event={"ID":"c9a6e0bf-3097-46e3-ad75-087b63c827dc","Type":"ContainerStarted","Data":"11b42074eebca482f4bfef2329b84719c96fd595c944294b3c6ca240916eb0ef"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.088615 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.090597 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.103313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.103562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.103661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.103831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.103922 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.108400 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.128590 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.143111 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.159171 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.174841 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175076 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175115 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175129 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.175281 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175368 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:51.175291665 +0000 UTC m=+39.358183976 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175572 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175676 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175756 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:43 crc kubenswrapper[4856]: E1203 09:12:43.175907 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:12:51.17588958 +0000 UTC m=+39.358781881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.176126 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.192014 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.210925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.210975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.210985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.211003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.211016 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.212214 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.232379 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.247044 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.271214 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.291544 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.316048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.316129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.316143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.316167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.316182 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.383557 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.397872 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.415245 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.420336 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.420378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.420387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.420402 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.420411 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.431445 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.446153 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.458121 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.470977 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.482618 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.497799 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.511884 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.524526 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.524685 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.524822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.524937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.525049 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.527938 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.628155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.628188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.628196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.628236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.628245 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.731056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.731096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.731106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.731126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.731145 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.833497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.833547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.833560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.833577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.833589 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.936308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.936562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.936690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.936849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:43 crc kubenswrapper[4856]: I1203 09:12:43.937016 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:43Z","lastTransitionTime":"2025-12-03T09:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.039657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.040017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.040026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.040040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.040050 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.146843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.146878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.146888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.146905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.146916 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.249467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.249510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.249520 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.249539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.249550 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.352944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.352985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.352995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.353012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.353023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.455241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.455269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.455276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.455289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.455298 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.557966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.558006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.558018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.558037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.558050 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.661030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.661079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.661088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.661103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.661112 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.687949 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.688033 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.688031 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:44 crc kubenswrapper[4856]: E1203 09:12:44.688141 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:44 crc kubenswrapper[4856]: E1203 09:12:44.688200 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:44 crc kubenswrapper[4856]: E1203 09:12:44.688282 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.763181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.763224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.763237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.763253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.763264 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.866318 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.866375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.866390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.866415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.866433 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.969069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.969106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.969114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.969141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:44 crc kubenswrapper[4856]: I1203 09:12:44.969151 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:44Z","lastTransitionTime":"2025-12-03T09:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.078734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.078771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.078784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.079133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.079363 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.103038 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.105529 4856 generic.go:334] "Generic (PLEG): container finished" podID="84cf8e52-cc64-49f6-93d4-6368ec50e14c" containerID="4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd" exitCode=0 Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.105579 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerDied","Data":"4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.108994 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wfssg" event={"ID":"c9a6e0bf-3097-46e3-ad75-087b63c827dc","Type":"ContainerStarted","Data":"6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.121166 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.135279 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.155996 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03
T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.182648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.182679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.182689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.182705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.182716 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.280518 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e
0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.297090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.297148 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.297158 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.297178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.297191 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.298579 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.311819 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.329085 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.345238 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.358689 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.384177 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.400238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.400325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.400336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.400352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.400364 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.427348 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.454969 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.473100 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.491059 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.502502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.502538 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.502549 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.502566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.502581 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.509124 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.521846 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.536033 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.554901 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.574737 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.587996 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.604766 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.604819 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.604831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.604848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.604859 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.605422 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.625238 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a
5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.636797 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.650678 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.662655 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.676768 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.688593 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.702431 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:45Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.707113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.707150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.707159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.707180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.707191 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.809548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.809599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.809612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.809629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.809642 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.913058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.913107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.913120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.913140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:45 crc kubenswrapper[4856]: I1203 09:12:45.913155 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:45Z","lastTransitionTime":"2025-12-03T09:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.015783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.015853 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.015865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.015882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.015894 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.118308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.118405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.118429 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.118457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.118480 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.222221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.222920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.223186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.223266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.223375 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.334136 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.334578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.334660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.334724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.334785 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.436559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.436759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.436846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.436983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.437063 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.539699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.539777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.539790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.539821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.539835 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.642178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.642212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.642223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.642239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.642251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.688913 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.688939 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.688949 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:46 crc kubenswrapper[4856]: E1203 09:12:46.689052 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:46 crc kubenswrapper[4856]: E1203 09:12:46.689143 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:46 crc kubenswrapper[4856]: E1203 09:12:46.689222 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.744575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.744636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.744648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.744669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.744683 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.846794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.846840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.846851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.846875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.846887 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.949301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.949350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.949363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.949378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:46 crc kubenswrapper[4856]: I1203 09:12:46.949389 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:46Z","lastTransitionTime":"2025-12-03T09:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.051878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.051959 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.051981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.052008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.052023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.118147 4856 generic.go:334] "Generic (PLEG): container finished" podID="84cf8e52-cc64-49f6-93d4-6368ec50e14c" containerID="7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c" exitCode=0 Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.118203 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerDied","Data":"7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.123391 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.137578 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.153859 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.154459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.154496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.154509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.154528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.154540 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.170246 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.189405 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.203318 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.217316 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.235161 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.251945 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.260009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.260046 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.260058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.260080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.260096 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.265179 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.283759 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.304017 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.319179 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.337553 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.354649 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:47Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.363293 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.363357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.363371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.363390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.363710 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.466289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.466636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.466767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.466909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.467012 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.570700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.571155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.571224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.571335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.571405 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.674442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.674841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.674932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.675011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.675083 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.778689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.778967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.779054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.779130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.779189 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.887967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.888378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.888391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.888413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.888427 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.992063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.992341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.992637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.992715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:47 crc kubenswrapper[4856]: I1203 09:12:47.992798 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:47Z","lastTransitionTime":"2025-12-03T09:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.096341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.096416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.096431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.096454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.096504 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.131504 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerStarted","Data":"a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.148609 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.163607 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.182026 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.198048 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.200186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.200238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.200249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.200267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.200280 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.214628 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.230449 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.246393 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.267654 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.286637 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.303578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.303636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.303646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.303674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.303685 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.305624 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.332446 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z"
Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.357163 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.372825 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.384520 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:48Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.406341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.406582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.406645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.406711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.406783 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.509900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.509946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.509957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.509974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.509987 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.612273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.612313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.612329 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.612345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.612356 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.688517 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.688646 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:48 crc kubenswrapper[4856]: E1203 09:12:48.688765 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.688824 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:48 crc kubenswrapper[4856]: E1203 09:12:48.688857 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:48 crc kubenswrapper[4856]: E1203 09:12:48.689036 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.714084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.714131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.714146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.714168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.714181 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.816129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.816175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.816187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.816206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.816218 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.918766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.919277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.919405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.919429 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:48 crc kubenswrapper[4856]: I1203 09:12:48.919441 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:48Z","lastTransitionTime":"2025-12-03T09:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.023079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.023116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.023127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.023148 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.023158 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.125775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.125855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.125865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.125878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.125889 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.141184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.141505 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.141549 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.155947 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589c
d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.169233 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.187604 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.201722 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.217064 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.228339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.228373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.228381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.228395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.228405 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.231557 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.245081 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.257171 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.280320 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.330225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.330259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.330271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.330287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.330297 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.396971 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.433204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.433238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.433247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.433263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.433273 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.433971 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.456229 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1
205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.472233 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.489422 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.568231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.568283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.568295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.568317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.568328 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.573362 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.594610 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.613213 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.630264 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.649114 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.672416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.672444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.672455 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.672471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.672481 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.673492 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.690299 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.705919 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.725453 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.743940 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.760164 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.780884 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.795078 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.811440 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.824588 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:49Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.892297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.892340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.892352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.892373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.892384 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.995299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.995351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.995362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.995384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:49 crc kubenswrapper[4856]: I1203 09:12:49.995395 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:49Z","lastTransitionTime":"2025-12-03T09:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.099043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.099085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.099100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.102875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.102947 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.145635 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.171783 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.189631 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.207923 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.207973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.207933 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.207987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.208195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.208212 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.228480 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.251607 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1
205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.269588 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.283844 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.300179 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.310723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.310789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.310798 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.310832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.310842 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.313887 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.331778 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.348031 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.366436 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.381520 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.413884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.413944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.413965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.413984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.413994 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.501950 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.517845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.517898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.517910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.517929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.517940 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.519788 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:50Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.746116 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.746171 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:50 crc kubenswrapper[4856]: E1203 09:12:50.746293 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.746328 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:50 crc kubenswrapper[4856]: E1203 09:12:50.746473 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:50 crc kubenswrapper[4856]: E1203 09:12:50.746563 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.747418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.747460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.747473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.747493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.747507 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.850369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.850438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.850451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.850470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.850482 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.954417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.954465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.954476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.954491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:50 crc kubenswrapper[4856]: I1203 09:12:50.954503 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:50Z","lastTransitionTime":"2025-12-03T09:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.057344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.057620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.057684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.057751 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.057836 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.151512 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.151702 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:13:07.151663497 +0000 UTC m=+55.334555798 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.151756 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.151879 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.151904 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.151962 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:07.151951754 +0000 UTC m=+55.334844045 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.152018 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.152063 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:07.152055157 +0000 UTC m=+55.334947458 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.160288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.160339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.160352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.160370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.160382 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.252736 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.253100 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.252947 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253253 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253374 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253200 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253508 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253522 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253608 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:07.253482384 +0000 UTC m=+55.436374685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:12:51 crc kubenswrapper[4856]: E1203 09:12:51.253686 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:07.253677189 +0000 UTC m=+55.436569490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.262931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.262967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.262980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.262996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.263009 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.366751 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.367005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.367021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.367042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.367057 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.472697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.472758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.472782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.472841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.472867 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.633774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.633842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.633852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.633867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.633876 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.736353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.736403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.736412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.736428 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.736437 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.839688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.839943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.840066 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.840187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.840260 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.943967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.944216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.944230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.944253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:51 crc kubenswrapper[4856]: I1203 09:12:51.944268 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:51Z","lastTransitionTime":"2025-12-03T09:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.048132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.048204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.048217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.048244 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.048258 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.150919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.150965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.150974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.150991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.151003 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.156850 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerDied","Data":"a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c"}
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.156835 4856 generic.go:334] "Generic (PLEG): container finished" podID="84cf8e52-cc64-49f6-93d4-6368ec50e14c" containerID="a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c" exitCode=0
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.173412 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.190470 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.206314 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.224382 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.245630 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1
205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.253216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.253289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.253303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.253326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.253645 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.261082 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.276896 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.290629 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.304992 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.319810 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.333975 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.350209 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.356403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.356446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.356456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.356472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.356482 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.364385 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.380465 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.414385 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk"] Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.415171 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.417733 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.417921 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.431524 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.448997 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.459418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.459475 4856 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.459487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.459506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.459517 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.466960 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" 
Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.485021 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.509105 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.525155 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.543676 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.544176 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 
09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.544235 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.544286 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mw2\" (UniqueName: \"kubernetes.io/projected/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-kube-api-access-47mw2\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.544450 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.558737 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc7359
6539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.563585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.563645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.563653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.563673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.563685 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.578745 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.596558 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.615357 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.637134 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.645583 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.645622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.645651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mw2\" (UniqueName: \"kubernetes.io/projected/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-kube-api-access-47mw2\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.645701 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.646843 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.647164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 
09:12:52.654475 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.671431 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.671632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mw2\" (UniqueName: \"kubernetes.io/projected/c44dc6de-92c1-468c-b5d3-7c04eda8fbf4-kube-api-access-47mw2\") pod \"ovnkube-control-plane-749d76644c-zmrwk\" (UID: \"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.671956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.671985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.671994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.672020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.672031 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.689938 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.690041 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.690119 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.690151 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.690340 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.690640 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.703693 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z 
is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.725580 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.747075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.747131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.747149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.747170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.747187 4856 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.760204 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.16
8.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.762271 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.766480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.766536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.766548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.766584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.766600 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.776642 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.780053 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"na
me\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.786062 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.797820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.798213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.798300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.798405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.798497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: W1203 09:12:52.799892 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44dc6de_92c1_468c_b5d3_7c04eda8fbf4.slice/crio-c4c6bb20f65a0645294417ba7394a336959c7b7b6a5c4b0e8b73dbd166521dae WatchSource:0}: Error finding container c4c6bb20f65a0645294417ba7394a336959c7b7b6a5c4b0e8b73dbd166521dae: Status 404 returned error can't find the container with id c4c6bb20f65a0645294417ba7394a336959c7b7b6a5c4b0e8b73dbd166521dae Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.816674 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.817871 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.825584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.825625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.825638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.825656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.825668 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.837612 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.849815 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.855773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.855832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.855872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.855897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.855912 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.861112 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.873000 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: E1203 09:12:52.873213 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.880576 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.898085 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.913929 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.927127 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.935643 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.935700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.935714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.935743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.935760 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:52Z","lastTransitionTime":"2025-12-03T09:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.944904 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:52 crc kubenswrapper[4856]: I1203 09:12:52.996071 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.015570 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.039884 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.041601 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.041653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.041666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.041689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.041702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.065097 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.079936 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.143957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc 
kubenswrapper[4856]: I1203 09:12:53.144008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.144021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.144041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.144053 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.173979 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" event={"ID":"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4","Type":"ContainerStarted","Data":"bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.174041 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" event={"ID":"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4","Type":"ContainerStarted","Data":"c4c6bb20f65a0645294417ba7394a336959c7b7b6a5c4b0e8b73dbd166521dae"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.178580 4856 generic.go:334] "Generic (PLEG): container finished" podID="84cf8e52-cc64-49f6-93d4-6368ec50e14c" containerID="f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707" exitCode=0 Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.178631 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerDied","Data":"f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.219657 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.236731 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c
3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.246762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.246813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.246830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.246867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.246884 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.261325 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.276887 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.293060 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.311294 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.348002 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 
2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.355953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.355986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.356000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.356018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.356032 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.363323 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.380529 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.397593 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.459090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.459142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.459159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.459191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.459213 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.549996 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.563671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.563728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.563740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.563762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.563776 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.572622 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.587460 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.612075 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.624127 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cqjvn"] Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.624516 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:53 crc kubenswrapper[4856]: E1203 09:12:53.624573 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.633832 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.651118 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.665439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.665483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.665494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.665510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.665520 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.668901 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d
71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.692886 4856 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.712701 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.729892 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.750205 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.759073 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.759133 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7pl\" (UniqueName: \"kubernetes.io/projected/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-kube-api-access-5q7pl\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.764239 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.768121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.768178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.768191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.768211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.768225 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.780984 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.796147 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\
\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.814998 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.831754 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.847802 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.860238 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.860317 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q7pl\" (UniqueName: \"kubernetes.io/projected/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-kube-api-access-5q7pl\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:53 crc kubenswrapper[4856]: E1203 09:12:53.860501 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:53 crc kubenswrapper[4856]: E1203 09:12:53.860619 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:12:54.360593218 +0000 UTC m=+42.543485519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.864492 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.870130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.870189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.870203 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.870224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.870238 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.879140 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.880342 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q7pl\" (UniqueName: \"kubernetes.io/projected/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-kube-api-access-5q7pl\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:53 crc 
kubenswrapper[4856]: I1203 09:12:53.897595 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.915209 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.974769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.974847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.974860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.974884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:53 crc kubenswrapper[4856]: I1203 09:12:53.974912 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:53Z","lastTransitionTime":"2025-12-03T09:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.078531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.078584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.078594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.078615 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.078627 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.182714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.183259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.183272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.183291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.183306 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.189897 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" event={"ID":"84cf8e52-cc64-49f6-93d4-6368ec50e14c","Type":"ContainerStarted","Data":"b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.192098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" event={"ID":"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4","Type":"ContainerStarted","Data":"3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.206103 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.222091 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.238310 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.257178 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.279641 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.286436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.286489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.286501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.286518 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.286530 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.295429 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.313408 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.329125 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.346527 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.362051 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.366088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:54 crc kubenswrapper[4856]: E1203 09:12:54.366565 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:54 crc kubenswrapper[4856]: E1203 09:12:54.366651 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:12:55.366626484 +0000 UTC m=+43.549518985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.376794 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.389560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.389607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.389621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.389639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.389650 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.396171 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.411911 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.428666 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.446716 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.462137 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.477622 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.492722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.492777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.492788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.492832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.492845 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.496613 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.520260 4856 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.535623 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 
09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.550427 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.567853 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.583244 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc 
kubenswrapper[4856]: I1203 09:12:54.595717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.595776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.595787 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.595813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.595827 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.600633 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.614749 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.632085 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.647528 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.663284 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.676873 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.688374 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.688424 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:54 crc kubenswrapper[4856]: E1203 09:12:54.688523 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.688552 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:54 crc kubenswrapper[4856]: E1203 09:12:54.688644 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:54 crc kubenswrapper[4856]: E1203 09:12:54.688797 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.694473 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.697741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.697790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.697800 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.697823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc 
kubenswrapper[4856]: I1203 09:12:54.697851 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.709199 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.724479 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.801122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.801277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.801363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.801403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.801417 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.904612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.904654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.904663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.904678 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:54 crc kubenswrapper[4856]: I1203 09:12:54.904688 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:54Z","lastTransitionTime":"2025-12-03T09:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.007102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.007142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.007151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.007164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.007175 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.109505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.109557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.109616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.109632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.109643 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.197415 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/0.log" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.200313 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885" exitCode=1 Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.201157 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.201979 4856 scope.go:117] "RemoveContainer" containerID="d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.212439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.212485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.212494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.212514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.212527 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.221417 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.234468 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.251270 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.273960 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"message\\\":\\\"vent handler 2 for removal\\\\nI1203 09:12:54.288854 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 09:12:54.288871 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 09:12:54.288879 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 09:12:54.288916 6068 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:54.288980 6068 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 09:12:54.288776 6068 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:54.289271 6068 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 09:12:54.290982 6068 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 09:12:54.291066 6068 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 09:12:54.291126 6068 factory.go:656] Stopping watch factory\\\\nI1203 09:12:54.291163 6068 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:54.291117 6068 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.290849 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.304896 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.315180 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.315225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.315234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.315251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.315264 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.322529 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.336385 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.354489 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.372806 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.377982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:55 crc kubenswrapper[4856]: E1203 09:12:55.379557 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:55 crc kubenswrapper[4856]: E1203 09:12:55.379636 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:12:57.379612538 +0000 UTC m=+45.562505049 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.390041 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.406110 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.426524 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.426605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.426621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.426646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.426663 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.429986 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.445566 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.459409 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.474291 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.530570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.530630 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.530701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.530730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.530764 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.633298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.633340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.633351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.633371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.633398 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.687927 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:55 crc kubenswrapper[4856]: E1203 09:12:55.688048 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.736196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.736229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.736237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.736251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.736260 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.838688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.838950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.839041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.839164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.839234 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.972636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.972695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.972706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.972726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:55 crc kubenswrapper[4856]: I1203 09:12:55.972736 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:55Z","lastTransitionTime":"2025-12-03T09:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.076178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.076228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.076240 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.076259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.076271 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.179728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.179774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.179785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.179801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.179828 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.205715 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/0.log" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.208793 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.209395 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.225692 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.241376 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.254135 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.269472 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.282564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.282600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc 
kubenswrapper[4856]: I1203 09:12:56.282609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.282623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.282632 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.290812 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1df
f5f379f50dced83049bb7cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"message\\\":\\\"vent handler 2 for removal\\\\nI1203 09:12:54.288854 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 09:12:54.288871 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 09:12:54.288879 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 09:12:54.288916 6068 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:54.288980 6068 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 09:12:54.288776 6068 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:54.289271 6068 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 09:12:54.290982 6068 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 09:12:54.291066 6068 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 09:12:54.291126 6068 factory.go:656] Stopping watch factory\\\\nI1203 09:12:54.291163 6068 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:54.291117 6068 handler.go:208] Removed *v1.Pod event handler 
6\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.306264 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 
09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.329717 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.344316 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.357933 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.371672 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.384951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.385003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.385018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.385040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.385055 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.390412 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.406292 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.421275 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.437176 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.450172 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.463421 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.488274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.488347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.488362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.488390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.488408 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.591865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.591904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.591914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.591926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.591935 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.688170 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.688224 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:56 crc kubenswrapper[4856]: E1203 09:12:56.688433 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.688237 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:56 crc kubenswrapper[4856]: E1203 09:12:56.688517 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:56 crc kubenswrapper[4856]: E1203 09:12:56.688663 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.694718 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.694782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.694808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.694872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.694897 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.798128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.798185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.798196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.798212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.798223 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.900813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.900896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.900908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.900925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:56 crc kubenswrapper[4856]: I1203 09:12:56.900936 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:56Z","lastTransitionTime":"2025-12-03T09:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.003744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.003850 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.003863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.003886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.003899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.106679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.106729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.106742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.106762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.106774 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.210672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.210715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.210728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.210746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.210759 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.214782 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/1.log" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.215564 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/0.log" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.218514 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab" exitCode=1 Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.218561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.218628 4856 scope.go:117] "RemoveContainer" containerID="d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.219491 4856 scope.go:117] "RemoveContainer" containerID="bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab" Dec 03 09:12:57 crc kubenswrapper[4856]: E1203 09:12:57.219715 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.234692 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.248865 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.265061 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.277656 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.293260 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.307198 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.313550 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.313599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.313607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.313622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.313634 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.324638 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.340419 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.356672 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.373951 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.389684 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.401332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:57 crc kubenswrapper[4856]: E1203 09:12:57.401560 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:57 crc kubenswrapper[4856]: E1203 09:12:57.401650 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:13:01.401626507 +0000 UTC m=+49.584518808 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.403471 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.417018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.417074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.417085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.417107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.417123 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.421960 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.436111 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.455323 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.478189 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8b60308e2cf0fe8975a41dedd5c6f0c11eaf6d1205cf2aac24ddc8677d82885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"message\\\":\\\"vent handler 2 for removal\\\\nI1203 09:12:54.288854 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 09:12:54.288871 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 09:12:54.288879 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 09:12:54.288916 6068 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:54.288980 6068 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 09:12:54.288776 6068 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:54.289271 6068 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 09:12:54.290982 6068 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 09:12:54.291066 6068 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 09:12:54.291126 6068 factory.go:656] Stopping watch factory\\\\nI1203 09:12:54.291163 6068 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:54.291117 6068 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:57Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.520425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.520490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.520503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.520552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.520569 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.623646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.624240 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.624252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.624275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.624290 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.688934 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:57 crc kubenswrapper[4856]: E1203 09:12:57.689104 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.727340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.727395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.727407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.727424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.727436 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.830292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.830337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.830349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.830370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.830384 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.932618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.932674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.932694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.932711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:57 crc kubenswrapper[4856]: I1203 09:12:57.932724 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:57Z","lastTransitionTime":"2025-12-03T09:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.035015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.035053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.035063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.035080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.035093 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.138168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.138221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.138234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.138253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.138266 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.227523 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/1.log" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.233979 4856 scope.go:117] "RemoveContainer" containerID="bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab" Dec 03 09:12:58 crc kubenswrapper[4856]: E1203 09:12:58.234177 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.240192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.240245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.240257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.240271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.240282 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.250799 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.268491 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.284945 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.313434 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.343405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.343453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc 
kubenswrapper[4856]: I1203 09:12:58.343467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.343484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.343497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.347700 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1df
f5f379f50dced83049bb7cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.367825 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.385348 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.403794 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.421019 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.434686 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.446435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.446484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.446494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.446511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.446523 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.454129 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.470854 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.487336 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.501124 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.515858 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.531454 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:12:58Z is after 2025-08-24T17:21:41Z" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.550530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.550602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.550614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.550637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.550649 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.653528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.653582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.653597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.653618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.653632 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.688373 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.688536 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:12:58 crc kubenswrapper[4856]: E1203 09:12:58.688621 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.688412 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:12:58 crc kubenswrapper[4856]: E1203 09:12:58.688725 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:12:58 crc kubenswrapper[4856]: E1203 09:12:58.688976 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.755954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.755996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.756009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.756025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.756036 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.859211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.859302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.859323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.859353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.859378 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.963857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.963921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.963935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.963962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:58 crc kubenswrapper[4856]: I1203 09:12:58.963980 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:58Z","lastTransitionTime":"2025-12-03T09:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.067038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.067093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.067103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.067123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.067135 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.170337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.170386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.170398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.170422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.170437 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.273699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.273784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.273800 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.273868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.273887 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.377763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.377844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.377860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.377882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.377895 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.480953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.481038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.481080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.481099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.481113 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.584292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.584669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.584747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.584841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.584904 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.687708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.687779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.687792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.687836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.687853 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.688488 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:12:59 crc kubenswrapper[4856]: E1203 09:12:59.688682 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.790900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.791698 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.791786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.791907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.791997 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.895002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.895073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.895084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.895105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.895118 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.997946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.997995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.998003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.998020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:12:59 crc kubenswrapper[4856]: I1203 09:12:59.998029 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:12:59Z","lastTransitionTime":"2025-12-03T09:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.100107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.100180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.100193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.100208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.100217 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.203585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.203644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.203656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.203684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.203697 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.306989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.307049 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.307059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.307102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.307117 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.410838 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.410933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.410946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.410971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.410984 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.514171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.514219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.514228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.514245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.514256 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.616705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.616749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.616762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.616779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.616791 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.688136 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.688205 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.688258 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:13:00 crc kubenswrapper[4856]: E1203 09:13:00.688391 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:13:00 crc kubenswrapper[4856]: E1203 09:13:00.688523 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:13:00 crc kubenswrapper[4856]: E1203 09:13:00.688567 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
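At this point the kubelet is looping: every ~100 ms it re-records the same node conditions and re-posts Ready=False because /etc/kubernetes/cni/net.d/ contains no CNI config, and the three pods above cannot get a sandbox until the network plugin writes one. A minimal triage sketch for this state, assuming shell access on the node and an oc client logged in to this CRC cluster (the paths, node name, and namespaces below are taken from the surrounding log; nothing else is):

    # Is any CNI config present yet? This is the directory the kubelet is watching.
    ls -l /etc/kubernetes/cni/net.d/

    # The config is normally written by the network plugin; on this cluster that is
    # OVN-Kubernetes, whose pods appear elsewhere in this log.
    oc get pods -n openshift-ovn-kubernetes -o wide

    # Watch the Ready condition flip once a config file shows up.
    oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

Note that while the node is NotReady the API server itself may still be settling, so the oc calls can fail transiently for reasons unrelated to CNI.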
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.719307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.719364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.719374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.719391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.719404 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.822054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.822100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.822111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.822127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.822138 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.925327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.925372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.925384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.925401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:00 crc kubenswrapper[4856]: I1203 09:13:00.925413 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:00Z","lastTransitionTime":"2025-12-03T09:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.028761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.028825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.028841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.028857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.028872 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.132071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.132123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.132134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.132153 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.132164 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.235157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.235195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.235207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.235220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.235229 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.337922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.337983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.337995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.338011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.338022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.440908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.440963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.440971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.440993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.441006 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.441776 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:01 crc kubenswrapper[4856]: E1203 09:13:01.441923 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 09:13:01 crc kubenswrapper[4856]: E1203 09:13:01.441989 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:13:09.441973148 +0000 UTC m=+57.624865449 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.543572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.543624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.543637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.543654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.543665 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
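The metrics-certs mount for network-metrics-daemon-cqjvn fails with object "openshift-multus"/"metrics-daemon-secret" not registered; right after a kubelet restart this usually means the kubelet's watch-based secret cache has not registered the object for this pod yet, not that the secret is missing from the API. Note the backoff in the nestedpendingoperations record: durationBeforeRetry 8s, next attempt at 09:13:09. A quick check, assuming the API server is reachable and oc is logged in (only names from the log are used):

    # Does the secret actually exist? If so, the mount error is just startup lag
    # and should clear on the next retry.
    oc get secret metrics-daemon-secret -n openshift-multus

    # Mount retries and their backoff surface as events on the pod.
    oc describe pod network-metrics-daemon-cqjvn -n openshift-multus | tail -n 20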
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.646618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.646659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.646670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.646699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.646722 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.688902 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:01 crc kubenswrapper[4856]: E1203 09:13:01.689068 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.749017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.749057 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.749065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.749078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.749087 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.852431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.852471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.852478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.852492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.852500 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.955338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.955381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.955394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.955412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:01 crc kubenswrapper[4856]: I1203 09:13:01.955423 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:01Z","lastTransitionTime":"2025-12-03T09:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.058167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.058241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.058255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.058279 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.058293 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.160673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.160735 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.160746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.160768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.160779 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.263591 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.263636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.263647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.263664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.263676 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.367162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.367441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.367560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.367679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.367771 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.471605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.471661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.471671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.471691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.471701 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.574199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.574395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.574419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.574439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.574452 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.678430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.678489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.678502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.678522 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.678534 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.688908 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.688938 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.689043 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:02 crc kubenswrapper[4856]: E1203 09:13:02.689078 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:02 crc kubenswrapper[4856]: E1203 09:13:02.689244 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:02 crc kubenswrapper[4856]: E1203 09:13:02.689484 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.704682 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.720585 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.737623 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.749700 4856 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.768963 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.781687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.781728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc 
kubenswrapper[4856]: I1203 09:13:02.781738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.781757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.781767 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.788683 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1df
f5f379f50dced83049bb7cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.805795 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.819559 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1
ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.840226 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\
\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.854249 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.870530 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.884751 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.884792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.884822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.884840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.884854 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.886619 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.902888 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.918248 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.934856 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.949077 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.987209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.987257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.987271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.987287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:02 crc kubenswrapper[4856]: I1203 09:13:02.987299 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:02Z","lastTransitionTime":"2025-12-03T09:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.090662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.090970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.091078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.091149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.091219 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.119369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.119665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.119762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.119852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.119944 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.133616 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:03Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.137747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.137900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.137999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.138084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.138153 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.152755 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:03Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.157692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.157719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.157727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.157741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.157749 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.170027 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:03Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.175220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.175256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.175268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.175286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.175296 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.188070 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:03Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.192419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.192477 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.192489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.192511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.192522 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.208218 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:03Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.208393 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.210472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.210516 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.210526 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.210543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.210554 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.313488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.313540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.313549 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.313570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.313584 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.416094 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.416133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.416141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.416157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.416167 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.519784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.519881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.519903 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.519926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.519938 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.621926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.621973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.621988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.622005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.622015 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.688678 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:03 crc kubenswrapper[4856]: E1203 09:13:03.688830 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.724146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.724186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.724199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.724215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.724228 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.826936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.826988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.826998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.827014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.827023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.928939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.928983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.928994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.929022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:03 crc kubenswrapper[4856]: I1203 09:13:03.929035 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:03Z","lastTransitionTime":"2025-12-03T09:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.031333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.031374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.031391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.031406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.031415 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.134725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.135010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.135103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.135221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.135309 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.237944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.238009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.238025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.238053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.238066 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.340284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.340324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.340337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.340353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.340364 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.443506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.443584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.443598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.443622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.443638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.550207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.551054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.551081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.551105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.551120 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.654076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.654140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.654154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.654185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.654199 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.688531 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.688590 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:04 crc kubenswrapper[4856]: E1203 09:13:04.688675 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.688751 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:04 crc kubenswrapper[4856]: E1203 09:13:04.688988 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:04 crc kubenswrapper[4856]: E1203 09:13:04.689137 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.757310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.757392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.757403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.757420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.757435 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.860609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.860663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.860672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.860691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.860702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.964772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.964852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.964867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.964891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:04 crc kubenswrapper[4856]: I1203 09:13:04.964905 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:04Z","lastTransitionTime":"2025-12-03T09:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.069151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.069205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.069217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.069236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.069248 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.171928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.171981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.171990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.172004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.172014 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.274373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.274410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.274419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.274435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.274446 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.376841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.376887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.376897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.376914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.376927 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.479406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.479440 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.479448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.479461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.479470 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.582374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.582437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.582457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.582483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.582502 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.685267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.685320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.685333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.685351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.685364 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.687993 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:05 crc kubenswrapper[4856]: E1203 09:13:05.688110 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.787319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.787358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.787368 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.787380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.787392 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.890355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.890397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.890408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.890423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.890433 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.992547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.992611 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.992622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.992640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:05 crc kubenswrapper[4856]: I1203 09:13:05.992650 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:05Z","lastTransitionTime":"2025-12-03T09:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.095462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.095522 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.095536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.095562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.095618 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.197669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.197747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.197760 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.197782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.197798 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.300344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.300382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.300441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.300456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.300464 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.402977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.403029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.403039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.403059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.403069 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.505873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.505943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.505952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.505973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.505985 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.608610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.608657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.608673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.608690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.608702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.688035 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.688065 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:06 crc kubenswrapper[4856]: E1203 09:13:06.688172 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.688202 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:06 crc kubenswrapper[4856]: E1203 09:13:06.688280 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:06 crc kubenswrapper[4856]: E1203 09:13:06.688434 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.711409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.711471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.711481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.711508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.711520 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.815020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.815087 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.815098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.815120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.815133 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.917243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.917290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.917304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.917324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:06 crc kubenswrapper[4856]: I1203 09:13:06.917335 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:06Z","lastTransitionTime":"2025-12-03T09:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.021142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.021195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.021206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.021227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.021240 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.126316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.126389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.126401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.126433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.126453 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.206788 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.206984 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.207073 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.207104 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:13:39.207060608 +0000 UTC m=+87.389952909 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.207163 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:39.207153201 +0000 UTC m=+87.390045502 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.207233 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.207367 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.207410 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:39.207391937 +0000 UTC m=+87.390284238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.229556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.229613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.229624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.229642 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.229654 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.308092 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.308250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308317 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308376 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308394 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308422 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308449 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308462 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308481 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:39.308457364 +0000 UTC m=+87.491349825 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.308532 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:13:39.308507156 +0000 UTC m=+87.491399627 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.332320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.332395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.332409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.332434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.332452 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.435517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.435573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.435586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.435606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.435622 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.538495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.538529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.538539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.538557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.538574 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.641426 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.641482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.641493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.641514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.641527 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.688936 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:07 crc kubenswrapper[4856]: E1203 09:13:07.689114 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
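The 32s durationBeforeRetry recorded above (and the 16s that appears further down for the metrics-certs volume) are consecutive steps of the kubelet's per-operation exponential backoff for failed volume mounts. A minimal sketch of that doubling schedule, in Go; the 500ms initial delay and the ~2m2s cap are assumptions based on the kubelet's upstream backoff defaults, not values taken from this log:

    // Illustrative doubling-backoff schedule for a repeatedly failing
    // volume operation. Assumed parameters: 500ms initial delay, factor 2,
    // cap of 2m2s (upstream kubelet defaults, not from this log).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond           // assumed initial delay
        maxDelay := 2*time.Minute + 2*time.Second // assumed cap
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %2d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under those assumptions, steps 6 and 7 of the schedule produce exactly the 16s and 32s delays visible in this log.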
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.744581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.744633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.744644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.744662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.744676 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.848768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.848852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.848866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.848887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.848902 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.952000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.952076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.952111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.952134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:07 crc kubenswrapper[4856]: I1203 09:13:07.952145 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:07Z","lastTransitionTime":"2025-12-03T09:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.054666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.054729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.054742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.054761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.054776 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.157445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.157488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.157502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.157518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.157531 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.259620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.259669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.259684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.259701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.259713 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.362306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.362354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.362368 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.362381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.362392 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.465300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.465376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.465390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.465410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.465423 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.567937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.567984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.567996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.568013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.568022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.670922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.670987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.670998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.671023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.671036 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.688492 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:08 crc kubenswrapper[4856]: E1203 09:13:08.688691 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.688753 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.688851 4856 util.go:30] "No sandbox for pod can be found. 
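Every NodeNotReady heartbeat above carries the same root cause: no CNI configuration file has appeared in /etc/kubernetes/cni/net.d/ yet, so the runtime keeps reporting NetworkReady=false. A minimal sketch of that readiness test, in Go; the accepted extensions (.conf, .conflist, .json) follow common libcni conventions and are an assumption, since the log only names the directory:

    // Sketch: report "network not ready" until a CNI config file shows up.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("network not ready:", err)
            return
        }
        for _, e := range entries {
            // Assumed config extensions, per common libcni conventions.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config found:", e.Name())
                return
            }
        }
        fmt.Println("network not ready: no CNI configuration file in", dir)
    }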
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:08 crc kubenswrapper[4856]: E1203 09:13:08.689029 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:08 crc kubenswrapper[4856]: E1203 09:13:08.689219 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.774587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.774660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.774670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.774688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.774700 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.877195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.877242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.877253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.877268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.877278 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.979649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.979704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.979718 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.979739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:08 crc kubenswrapper[4856]: I1203 09:13:08.979754 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:08Z","lastTransitionTime":"2025-12-03T09:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.082306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.082353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.082366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.082386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.082397 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.185330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.185380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.185390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.185407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.185418 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.288625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.288679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.288691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.288713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.288793 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.391517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.391566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.391576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.391595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.391607 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.493932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.493977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.493989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.494007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.494022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.533957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:09 crc kubenswrapper[4856]: E1203 09:13:09.534091 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:13:09 crc kubenswrapper[4856]: E1203 09:13:09.534171 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:13:25.534150749 +0000 UTC m=+73.717043050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.596919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.597002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.597016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.597043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.597059 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.689031 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:09 crc kubenswrapper[4856]: E1203 09:13:09.689281 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.700156 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.700200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.700213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.700232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.700244 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.728522 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.741083 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.742326 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.759747 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.775772 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.789598 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.802651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.802699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.802710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.802727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.802740 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.806210 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.822060 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.840971 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.855244 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.869514 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.882335 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.896890 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.905391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.905446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.905460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.905482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.905497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:09Z","lastTransitionTime":"2025-12-03T09:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.910768 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.922857 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.938523 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.960726 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:09 crc kubenswrapper[4856]: I1203 09:13:09.975015 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:09Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.007920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.007969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.007981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.007999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.008011 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.110461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.110507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.110518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.110535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.110546 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.213544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.213589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.213598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.213613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.213622 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.317342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.317419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.317461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.317482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.317495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.420467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.420526 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.420537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.420560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.420573 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.523374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.523429 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.523441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.523461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.523476 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.626027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.626077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.626085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.626100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.626115 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.688027 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.688025 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.688043 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:10 crc kubenswrapper[4856]: E1203 09:13:10.688304 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:10 crc kubenswrapper[4856]: E1203 09:13:10.688172 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:10 crc kubenswrapper[4856]: E1203 09:13:10.688479 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.733223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.733881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.734027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.734068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.734086 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.836131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.836164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.836175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.836199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.836210 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.938892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.938941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.938951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.938967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:10 crc kubenswrapper[4856]: I1203 09:13:10.938980 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:10Z","lastTransitionTime":"2025-12-03T09:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.042079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.042153 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.042176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.042205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.042227 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.145242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.145308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.145320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.145340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.145352 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.247905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.247954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.247965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.247982 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.247995 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.351106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.351159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.351171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.351190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.351202 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.453452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.453494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.453509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.453523 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.453532 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.555720 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.555774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.555785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.555816 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.555829 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.658382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.658435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.658447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.658466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.658482 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.688885 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:11 crc kubenswrapper[4856]: E1203 09:13:11.689155 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.690332 4856 scope.go:117] "RemoveContainer" containerID="bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.761132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.761742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.761757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.761847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.761866 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.866200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.866265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.866277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.866299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.866312 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.969584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.969650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.969663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.969688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:11 crc kubenswrapper[4856]: I1203 09:13:11.969702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:11Z","lastTransitionTime":"2025-12-03T09:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.072376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.072421 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.072432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.072452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.072466 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.175209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.175272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.175283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.175304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.175317 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.278699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.278759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.278772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.278793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.278840 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.286102 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/1.log" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.289389 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.289991 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.314831 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c
1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.329774 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 
09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.343948 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.355366 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.371077 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.381670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.381709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc 
kubenswrapper[4856]: I1203 09:13:12.381720 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.381736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.381746 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.388792 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.402907 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.417311 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/n
et.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.429170 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.443317 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.457123 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.472599 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.484927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.484980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.485003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.485024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.485034 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.485293 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.499877 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.515392 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.535327 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.550771 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.587121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.587185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.587196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.587218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.587231 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.688079 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.688167 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:12 crc kubenswrapper[4856]: E1203 09:13:12.688241 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.688341 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:12 crc kubenswrapper[4856]: E1203 09:13:12.688438 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:12 crc kubenswrapper[4856]: E1203 09:13:12.688330 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.690208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.690244 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.690258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.690274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.690286 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.707905 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.721085 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.735080 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.749595 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.765851 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.779614 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.792674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.792719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.792728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.792743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.792754 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.795425 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.811089 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.836924 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.850914 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.869776 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.893878 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.895577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.895622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.895632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.895648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.895660 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.908139 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.921700 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.937169 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.948877 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.962575 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:12Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.998659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.998746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.998764 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.998785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:12 crc kubenswrapper[4856]: I1203 09:13:12.998835 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:12Z","lastTransitionTime":"2025-12-03T09:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.103068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.103138 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.103151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.103171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.103184 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.205922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.205975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.205992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.206013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.206029 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.294680 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/2.log" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.295203 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/1.log" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.297342 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd" exitCode=1 Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.297392 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.297441 4856 scope.go:117] "RemoveContainer" containerID="bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.298267 4856 scope.go:117] "RemoveContainer" containerID="6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd" Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.298443 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.309142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.309186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.309201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.309228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.309240 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.312484 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.326303 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.338483 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.353200 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.371740 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.385697 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.398001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.398448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.398583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.398692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.398886 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.398784 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.411187 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.412745 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.414355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.414393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.414406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.414423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.414434 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.425255 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.429082 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d
39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.432731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.432900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.432963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.433024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.433080 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.436210 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.446829 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.451571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.451609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.451620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.451638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.451651 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.452662 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.468134 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d
39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.469907 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.472945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.472989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.473003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.473021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.473034 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.482881 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.485373 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d
39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.485689 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.487454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.487495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.487507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.487524 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.487537 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.493171 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.507566 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.527163 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6556330206ab1e618cc2e19b86bcfb6dfce1dff5f379f50dced83049bb7cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:12:56Z\\\",\\\"message\\\":\\\"a1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091195 6293 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.091561 6293 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 09:12:56.091676 6293 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 09:12:56.092089 6293 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 09:12:56.092117 6293 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 09:12:56.092143 6293 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 09:12:56.092163 6293 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 09:12:56.092202 6293 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 09:12:56.092208 6293 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 09:12:56.092227 6293 factory.go:656] Stopping watch factory\\\\nI1203 09:12:56.092242 6293 ovnkube.go:599] Stopped ovnkube\\\\nI1203 09:12:56.092250 6293 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 
09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.541695 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:13Z is after 2025-08-24T17:21:41Z" Dec 03 
09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.590355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.590405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.590415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.590432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.590442 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.688661 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:13 crc kubenswrapper[4856]: E1203 09:13:13.688876 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.693253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.693288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.693298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.693309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.693318 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.796627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.796677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.796689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.796711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.796723 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.899750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.900167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.900182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.900201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:13 crc kubenswrapper[4856]: I1203 09:13:13.900214 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:13Z","lastTransitionTime":"2025-12-03T09:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.005285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.005328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.005338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.005353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.005363 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.108665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.108710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.108719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.108736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.108745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.211041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.211073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.211085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.211101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.211111 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.302943 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/2.log"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.306243 4856 scope.go:117] "RemoveContainer" containerID="6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd"
Dec 03 09:13:14 crc kubenswrapper[4856]: E1203 09:13:14.306406 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.313235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.313451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.313470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.313487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.313500 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.319082 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.333618 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57
b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.347710 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.360278 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.374905 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.386644 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.397975 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.413780 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.416514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.416632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.416700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.416769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.416868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.431955 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.446061 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.464409 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.486163 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.500354 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.514041 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.519844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.519892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.519901 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.519916 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.519926 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.527100 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.540874 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.553046 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:14Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.623466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.623509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.623521 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.623541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.623553 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.689582 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.689739 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:14 crc kubenswrapper[4856]: E1203 09:13:14.689846 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:14 crc kubenswrapper[4856]: E1203 09:13:14.689926 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.690257 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:14 crc kubenswrapper[4856]: E1203 09:13:14.690449 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.725728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.725768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.725779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.725795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.725859 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.828268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.828314 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.828325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.828343 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.828354 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.930913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.931222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.931368 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.931513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:14 crc kubenswrapper[4856]: I1203 09:13:14.931661 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:14Z","lastTransitionTime":"2025-12-03T09:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.035188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.035238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.035249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.035272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.035284 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.138721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.138780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.138795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.138842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.138860 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.242252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.242310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.242320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.242342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.242357 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.344870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.344935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.344954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.344981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.344999 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.447979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.448015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.448024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.448038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.448047 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.551204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.551270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.551286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.551313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.551331 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.654049 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.654129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.654140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.654160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.654174 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.688694 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:15 crc kubenswrapper[4856]: E1203 09:13:15.688852 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.756579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.756638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.756651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.756672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.756683 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.858928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.858992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.859004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.859027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.859041 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.961424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.961462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.961471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.961485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:15 crc kubenswrapper[4856]: I1203 09:13:15.961495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:15Z","lastTransitionTime":"2025-12-03T09:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.064384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.064427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.064437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.064455 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.064467 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.167415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.167478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.167490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.167512 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.167532 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.270673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.270734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.270745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.270765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.270777 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.373727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.373779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.373790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.373833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.373854 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.476998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.477068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.477083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.477110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.477125 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.579729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.579836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.579854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.579885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.579899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.682429 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.682479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.682492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.682509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.682522 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.689027 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.689049 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:16 crc kubenswrapper[4856]: E1203 09:13:16.689136 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.689152 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:16 crc kubenswrapper[4856]: E1203 09:13:16.689301 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:16 crc kubenswrapper[4856]: E1203 09:13:16.689396 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.785649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.785711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.785721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.785737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.785752 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.888837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.888897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.888912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.888944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.888991 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.991682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.991728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.991759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.991777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:16 crc kubenswrapper[4856]: I1203 09:13:16.991788 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:16Z","lastTransitionTime":"2025-12-03T09:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.094835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.094902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.094912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.094930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.094941 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.197876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.197941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.197952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.197974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.197986 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.301531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.301598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.301612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.301636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.301652 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.405230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.405295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.405312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.405335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.405356 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.508711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.508749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.508758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.508775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.508785 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.612454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.612538 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.612556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.612586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.612600 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.688729 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:17 crc kubenswrapper[4856]: E1203 09:13:17.688985 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.715896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.715954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.715971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.715994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.716008 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.818758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.818803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.818834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.818858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.818871 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.922060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.922119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.922135 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.922157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:17 crc kubenswrapper[4856]: I1203 09:13:17.922175 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:17Z","lastTransitionTime":"2025-12-03T09:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.024841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.024886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.024897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.024915 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.024928 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.128423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.128474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.128491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.128513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.128531 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.231756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.231827 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.231842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.231858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.231868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.334647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.334696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.334708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.334724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.334739 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.436633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.436680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.436696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.436719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.436735 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.539423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.539464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.539475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.539490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.539502 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.641570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.641613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.641623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.641655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.641667 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.688607 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.688747 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:18 crc kubenswrapper[4856]: E1203 09:13:18.688794 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.688909 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:18 crc kubenswrapper[4856]: E1203 09:13:18.688964 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:18 crc kubenswrapper[4856]: E1203 09:13:18.689158 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.744704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.744771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.744789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.744837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.744849 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.848003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.848063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.848076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.848099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.848113 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.950870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.950935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.950950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.950968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:18 crc kubenswrapper[4856]: I1203 09:13:18.950980 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:18Z","lastTransitionTime":"2025-12-03T09:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.053848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.053893 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.053903 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.053921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.053934 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.157191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.157239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.157251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.157271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.157286 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.260518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.260584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.260600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.260628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.260642 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.364973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.365051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.365063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.365085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.365097 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.467603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.467669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.467680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.467697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.467709 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.570690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.570749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.570761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.570787 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.570825 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.673606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.673641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.673650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.673663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.673674 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.688615 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:19 crc kubenswrapper[4856]: E1203 09:13:19.688747 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.776104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.776157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.776165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.776180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.776192 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.879548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.879614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.879628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.879648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.879662 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.982741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.982966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.982988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.983006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:19 crc kubenswrapper[4856]: I1203 09:13:19.983023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:19Z","lastTransitionTime":"2025-12-03T09:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.085517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.085568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.085582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.085598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.085611 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.187645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.187688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.187700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.187715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.187724 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.290524 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.290629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.290641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.290678 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.290694 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.393895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.393938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.393949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.393965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.393980 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.496677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.496732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.496743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.496761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.496770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.599225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.599281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.599293 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.599308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.599320 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.688070 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.688106 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.688125 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:20 crc kubenswrapper[4856]: E1203 09:13:20.688215 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:20 crc kubenswrapper[4856]: E1203 09:13:20.688290 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:20 crc kubenswrapper[4856]: E1203 09:13:20.688555 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.701773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.701824 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.701836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.701852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.701862 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.804151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.804192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.804200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.804215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.804226 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.907697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.907796 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.907847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.907872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:20 crc kubenswrapper[4856]: I1203 09:13:20.907886 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:20Z","lastTransitionTime":"2025-12-03T09:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.011407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.011447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.011457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.011481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.011497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.114682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.114731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.114746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.114766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.114777 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.217711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.217755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.217766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.217783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.217794 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.319963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.319997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.320004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.320018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.320027 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.423129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.423171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.423184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.423201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.423213 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.527175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.527269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.527281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.527300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.527312 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.631067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.631118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.631131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.631147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.631160 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.689057 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:21 crc kubenswrapper[4856]: E1203 09:13:21.689286 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.734154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.734204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.734215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.734232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.734245 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.838479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.838516 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.838530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.838548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.838560 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.942482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.942536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.942548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.942570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:21 crc kubenswrapper[4856]: I1203 09:13:21.942583 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:21Z","lastTransitionTime":"2025-12-03T09:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.045342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.045393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.045401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.045420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.045431 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.149475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.149519 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.149529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.149548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.149558 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.252048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.252101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.252116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.252135 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.252151 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.354836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.354909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.354922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.354961 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.354976 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.458587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.458640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.458655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.458677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.458692 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.562724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.562784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.562796 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.562839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.562854 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.665664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.665721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.665738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.665758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.665772 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.688454 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.688492 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:22 crc kubenswrapper[4856]: E1203 09:13:22.688615 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.688644 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:22 crc kubenswrapper[4856]: E1203 09:13:22.688751 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:22 crc kubenswrapper[4856]: E1203 09:13:22.688938 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.706844 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.723854 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.741217 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.752923 4856 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.768599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.768651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.768668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.768690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.768707 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.770455 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.793192 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde63
1e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.807075 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.819522 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.839251 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.852350 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.867354 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.871821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.871862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.871874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.871889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.871899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.880391 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.895234 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.912325 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.927697 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.945985 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.961454 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:22Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.974366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.974416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.974427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.974442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:22 crc kubenswrapper[4856]: I1203 09:13:22.974451 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:22Z","lastTransitionTime":"2025-12-03T09:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.078195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.078236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.078248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.078266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.078279 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.180448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.180511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.180521 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.180537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.180547 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.283212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.283262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.283272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.283290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.283302 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.385727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.385785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.385816 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.385836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.385850 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.488200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.488271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.488283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.488299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.488344 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.591403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.591459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.591481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.591504 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.591517 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.673779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.673878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.673893 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.673922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.673937 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.688697 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.688927 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.689466 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:23Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.695230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.695279 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.695291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.695313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.695325 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.709532 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:23Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.715264 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.715327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
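
Every status-patch attempt in this run dies with the same terminal error, visible at the end of the err string above: the apiserver's call to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node?timeout=10s fails TLS verification because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2025-12-03T09:13:23Z. The gap between the two timestamps suggests the cluster was restarted months after the certificate's validity window closed. A hedged Go sketch of the validity comparison that produces this class of x509 error (the certificate path is hypothetical; in the real flow the check happens inside the TLS handshake):

    // certcheck.go — hypothetical sketch, not OpenShift's code: reproduce the
    // NotBefore/NotAfter test behind "x509: certificate has expired or is not
    // yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path; the real cert is served by the webhook endpoint.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            fmt.Println("read:", err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            os.Exit(1)
        }
        now := time.Now().UTC()
        // The same comparison the TLS handshake performs before the kubelet's
        // Post to /node?timeout=10s is rejected.
        switch {
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }
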
event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.715338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.715361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.715376 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.728526 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:23Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.733441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.733494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.733505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.733525 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.733535 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.746948 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:23Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.750900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.750973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
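
The payload the kubelet keeps retrying is a strategic-merge patch against the node's .status: $setElementOrder/conditions pins the order of the four condition types, allocatable and capacity restate the node's resources, and each conditions entry carries lastHeartbeatTime, lastTransitionTime, message, reason, status, and type. A small Go sketch that reproduces the shape of the Ready entry embedded, triple-escaped, in the err strings above (the struct is illustrative, not the real v1.NodeCondition type):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // nodeCondition mirrors the fields visible in the escaped patch above;
    // it is an illustrative struct, not a Kubernetes API type.
    type nodeCondition struct {
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Message            string `json:"message"`
        Reason             string `json:"reason"`
        Status             string `json:"status"`
        Type               string `json:"type"`
    }

    func main() {
        now := time.Date(2025, 12, 3, 9, 13, 23, 0, time.UTC).Format(time.RFC3339)
        ready := nodeCondition{
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
                "Has your network provider started?",
            Reason: "KubeletNotReady",
            Status: "False",
            Type:   "Ready",
        }
        out, _ := json.Marshal(ready)
        // Prints the same JSON object that appears escaped inside the log line.
        fmt.Println(string(out))
    }
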
event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.750985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.751006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.751018 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.766160 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:23Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:23 crc kubenswrapper[4856]: E1203 09:13:23.766305 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.768573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.768619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.768633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.768654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.768669 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.871752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.871792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.871801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.871837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.871851 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.974178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.974219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.974231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.974250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:23 crc kubenswrapper[4856]: I1203 09:13:23.974265 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:23Z","lastTransitionTime":"2025-12-03T09:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.077003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.077050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.077061 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.077079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.077091 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.179965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.180010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.180022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.180038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.180049 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.282985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.283056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.283075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.283099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.283113 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.385746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.385786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.385798 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.385835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.385847 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.488675 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.488727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.488739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.488756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.488767 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.591851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.593172 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.593415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.593449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.593462 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.688875 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.688918 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.688878 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:24 crc kubenswrapper[4856]: E1203 09:13:24.688992 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:24 crc kubenswrapper[4856]: E1203 09:13:24.690142 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.690391 4856 scope.go:117] "RemoveContainer" containerID="6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd" Dec 03 09:13:24 crc kubenswrapper[4856]: E1203 09:13:24.690593 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:24 crc kubenswrapper[4856]: E1203 09:13:24.692092 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.696529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.696586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.696604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.696662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.696677 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.799651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.799716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.799727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.799744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.799775 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.903024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.903069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.903079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.903099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:24 crc kubenswrapper[4856]: I1203 09:13:24.903137 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:24Z","lastTransitionTime":"2025-12-03T09:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.006144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.006209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.006221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.006242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.006255 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.108710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.108767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.108780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.108799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.108839 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.212099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.212173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.212188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.212213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.212225 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.315004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.315041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.315050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.315066 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.315079 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.417562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.417607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.417616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.417633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.417643 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.520055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.520093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.520104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.520124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.520139 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.615953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:25 crc kubenswrapper[4856]: E1203 09:13:25.616114 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:13:25 crc kubenswrapper[4856]: E1203 09:13:25.616169 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:13:57.616154548 +0000 UTC m=+105.799046849 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.622990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.623035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.623045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.623060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.623069 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.688222 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:25 crc kubenswrapper[4856]: E1203 09:13:25.688388 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.726117 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.726180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.726197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.726220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.726237 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.829423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.829510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.829528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.829556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.829574 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.932280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.932351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.932369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.932397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:25 crc kubenswrapper[4856]: I1203 09:13:25.932414 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:25Z","lastTransitionTime":"2025-12-03T09:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.035528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.035567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.035578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.035593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.035604 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.138399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.138463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.138482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.138507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.138528 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.241454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.241521 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.241531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.241553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.241569 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.345396 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.345482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.345497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.345559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.345577 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.448100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.448233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.448249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.448266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.448276 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.552430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.552525 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.552548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.552619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.552644 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.655644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.655703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.655714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.655738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.655755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.688969 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:26 crc kubenswrapper[4856]: E1203 09:13:26.689104 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.688984 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.689113 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:26 crc kubenswrapper[4856]: E1203 09:13:26.689263 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:26 crc kubenswrapper[4856]: E1203 09:13:26.689182 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.758973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.759065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.759080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.759107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.759124 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.864600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.864671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.864683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.864707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.864721 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.967566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.967650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.967662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.967679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:26 crc kubenswrapper[4856]: I1203 09:13:26.967714 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:26Z","lastTransitionTime":"2025-12-03T09:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.071574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.071674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.071689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.071714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.071729 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.174087 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.174145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.174155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.174171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.174182 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.276904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.277004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.277020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.277045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.277062 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.379877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.379944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.379960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.379983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.379995 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.482878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.482923 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.482932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.482957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.482969 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.585742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.585824 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.585836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.585856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.585867 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.688035 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:27 crc kubenswrapper[4856]: E1203 09:13:27.688224 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.689381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.689484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.689508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.689553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.689569 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.792398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.792470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.792484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.792513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.792535 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.895085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.895141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.895186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.895201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.895212 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.997753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.997801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.997828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.997846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:27 crc kubenswrapper[4856]: I1203 09:13:27.997857 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:27Z","lastTransitionTime":"2025-12-03T09:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.101141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.101208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.101218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.101235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.101247 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.204381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.204463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.204474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.204493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.204508 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.307930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.307981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.307992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.308011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.308022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.411207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.411262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.411282 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.411306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.411320 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.514178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.514218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.514228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.514245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.514255 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.617065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.617121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.617134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.617156 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.617172 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.690113 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.690209 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.690126 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:28 crc kubenswrapper[4856]: E1203 09:13:28.690320 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:28 crc kubenswrapper[4856]: E1203 09:13:28.690469 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:28 crc kubenswrapper[4856]: E1203 09:13:28.690575 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.720490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.720548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.720561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.720585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.720601 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.823382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.823439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.823450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.823471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.823487 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.926855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.926933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.926945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.926970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:28 crc kubenswrapper[4856]: I1203 09:13:28.926982 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:28Z","lastTransitionTime":"2025-12-03T09:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.030443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.030499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.030507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.030522 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.030536 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.133696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.134305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.134321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.134348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.134362 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.237202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.237265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.237276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.237296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.237308 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.340691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.340748 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.340758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.340779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.340796 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.443619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.443693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.443704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.443724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.443737 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.546711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.546759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.546768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.546784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.546829 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.650112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.650175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.650189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.650215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.650230 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.688090 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:29 crc kubenswrapper[4856]: E1203 09:13:29.688277 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.753029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.753081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.753102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.753129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.753144 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.856004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.856070 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.856093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.856216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.856446 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.960377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.960452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.960466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.960497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:29 crc kubenswrapper[4856]: I1203 09:13:29.960511 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:29Z","lastTransitionTime":"2025-12-03T09:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.063947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.063993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.064004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.064021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.064030 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.167861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.167911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.167921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.167949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.167968 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.271624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.271703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.271718 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.272088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.272119 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.376793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.376862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.376875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.376894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.376905 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.480393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.480450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.480464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.480483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.480495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.583985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.584079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.584099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.584124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.584143 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.686650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.686705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.686721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.686770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.686797 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:30Z","lastTransitionTime":"2025-12-03T09:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.688729 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:30 crc kubenswrapper[4856]: E1203 09:13:30.688906 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.688735 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:30 crc kubenswrapper[4856]: I1203 09:13:30.688979 4856 util.go:30] "No sandbox for pod can be found. 
[the node-status block repeats 9 more times at ~100 ms intervals, 09:13:30.790 through 09:13:31.613]
Dec 03 09:13:31 crc kubenswrapper[4856]: I1203 09:13:31.688473 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:31 crc kubenswrapper[4856]: E1203 09:13:31.688673 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
[the node-status block repeats 7 more times at ~100 ms intervals, 09:13:31.717 through 09:13:32.338]
Has your network provider started?"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.370456 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/0.log" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.370518 4856 generic.go:334] "Generic (PLEG): container finished" podID="29870646-4fde-4ebe-a3a9-0ef904f1bbaa" containerID="40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6" exitCode=1 Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.370559 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerDied","Data":"40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.371026 4856 scope.go:117] "RemoveContainer" containerID="40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.389605 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.408135 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.427189 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.441183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.441217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.441227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.441242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.441253 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:32Z","lastTransitionTime":"2025-12-03T09:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.443184 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.457934 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.475393 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.491916 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.510219 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.528400 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.544250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.544335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.544351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.544378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.544399 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:32Z","lastTransitionTime":"2025-12-03T09:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.545135 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.559114 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.574523 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.591459 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.606539 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.625837 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.648110 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.656396 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.656472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.656502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.656528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.656547 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:32Z","lastTransitionTime":"2025-12-03T09:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.662684 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.688955 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.689072 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:32 crc kubenswrapper[4856]: E1203 09:13:32.689157 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:32 crc kubenswrapper[4856]: E1203 09:13:32.689312 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.689615 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:32 crc kubenswrapper[4856]: E1203 09:13:32.689709 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.711764 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.718030 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc1
06f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.734897 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 
09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.752753 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.758647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.758692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.758708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.758730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.758743 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:32Z","lastTransitionTime":"2025-12-03T09:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.766392 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.783744 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.799902 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.814400 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.830465 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.844584 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.860055 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.861235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.861299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.861310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.861334 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.861347 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:32Z","lastTransitionTime":"2025-12-03T09:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.873248 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.884309 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.897917 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.916100 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.930927 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.946386 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.962770 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:32Z is after 2025-08-24T17:21:41Z"
Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.964197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.964272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.964285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.964302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:32 crc kubenswrapper[4856]: I1203 09:13:32.964346 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:32Z","lastTransitionTime":"2025-12-03T09:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.067277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.067341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.067354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.067386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.067402 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.171963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.172005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.172014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.172033 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.172044 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.275487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.275547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.275557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.275577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.275596 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.377532 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/0.log"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.377669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerStarted","Data":"b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440"}
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.378213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.378258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.378269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.378289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.378303 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.400873 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de027
8aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.418065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.432237 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.449479 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.469951 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.481115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.481149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.481158 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.481175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.481190 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.484729 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.499484 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 
2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.515082 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.531357 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.545641 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.560336 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.575070 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.583979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.584023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.584036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.584052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.584069 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.593694 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.611954 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.629254 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.646375 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.661059 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.676371 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:33Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.686207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.686285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.686296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.686314 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.686324 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.688518 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:33 crc kubenswrapper[4856]: E1203 09:13:33.688686 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.789767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.789854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.789870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.789894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.789910 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.892613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.892667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.892677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.892693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.892706 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.995468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.995528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.995540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.995580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:33 crc kubenswrapper[4856]: I1203 09:13:33.995595 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:33Z","lastTransitionTime":"2025-12-03T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.053881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.053977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.053992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.054019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.054033 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.069848 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:34Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.075353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.075399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.075409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.075425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.075437 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.090112 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:34Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.094754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.094855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.094870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.094888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.094900 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.108422 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:34Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.113364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.113411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
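Annotation: the patch failures above (and the retries that follow, whose err payloads are byte-identical to the one printed in full above and are not repeated here) all have the same root cause. The node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose notAfter (2025-08-24T17:21:41Z) is before the current time, so TLS verification rejects every kubelet node-status patch before it reaches the API object. A minimal Go sketch of the same validity check, assuming shell access to the node and that the webhook is still listening on that port; the address comes from the log, the program itself is illustrative and not part of any OpenShift tooling:

// certcheck.go - illustrative sketch only, not part of the journal.
// Fetches the certificate served on the webhook port seen in the log
// and reports its validity window, mirroring the kubelet's x509 check.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets us retrieve the certificate even though
	// verification would fail (that failure is exactly what the log shows).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// Corresponds to the log's "certificate has expired or is not yet
		// valid: current time ... is after 2025-08-24T17:21:41Z"
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Until that certificate is rotated, or the node clock again falls inside its validity window, the kubelet keeps cycling through the retry pattern recorded here.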
event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.113420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.113436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.113447 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.128006 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:34Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.132048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.132081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
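Annotation: independently of the webhook failure, the node stays NotReady because the container runtime reports no CNI configuration: the runtime looks for a network config file (conventionally .conf, .conflist, or .json) in /etc/kubernetes/cni/net.d/, and the network plugin has not written one yet. A stdlib-only Go sketch of that directory scan, assuming the path from the log and the conventional CNI file extensions; this is illustrative only, not the runtime's actual loader:

// cnicheck.go - illustrative sketch only, not part of the journal.
// Lists CNI network configs in the directory the runtime is complaining
// about; an empty result corresponds to the repeated "no CNI
// configuration file" message above.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	// CNI loaders conventionally accept .conf, .conflist and .json files.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			continue // Glob errors only on a bad pattern; these are literals
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found - network plugin has not started")
		return
	}
	for _, f := range found {
		fmt.Println(f)
	}
}

Once the network provider starts and drops its config into that directory, NetworkReady should flip to true and the NodeNotReady churn recorded below should stop.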
event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.132089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.132104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.132113 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.145995 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:34Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.146320 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.148427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.148491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.148507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.148527 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.148540 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.251271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.251342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.251354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.251376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.251388 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.353963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.354024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.354037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.354061 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.354075 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.456190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.456238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.456254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.456276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.456287 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.559674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.559743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.559761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.559780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.559791 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.663192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.663252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.663263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.663285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.663297 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.688861 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.689051 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.689247 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.689464 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.689648 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:34 crc kubenswrapper[4856]: E1203 09:13:34.690020 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.766016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.766079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.766090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.766106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.766117 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.869341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.870080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.870126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.870194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.870231 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.973779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.973877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.973893 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.973921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:34 crc kubenswrapper[4856]: I1203 09:13:34.973937 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:34Z","lastTransitionTime":"2025-12-03T09:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.077251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.077333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.077349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.077378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.077394 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.180465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.180535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.180551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.180574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.180591 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.284243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.284312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.284325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.284348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.284363 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.386149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.386202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.386213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.386233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.386246 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.489637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.489696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.489706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.489727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.489744 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.593883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.593948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.593964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.593987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.594000 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.688253 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:35 crc kubenswrapper[4856]: E1203 09:13:35.688459 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.697096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.697134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.697145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.697162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.697177 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.800284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.800336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.800347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.800368 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.800381 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.903377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.903440 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.903465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.903499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:35 crc kubenswrapper[4856]: I1203 09:13:35.903513 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:35Z","lastTransitionTime":"2025-12-03T09:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.006310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.006353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.006364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.006382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.006396 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.109597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.109701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.109725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.109745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.109758 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.213081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.213140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.213151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.213176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.213189 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.316305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.316374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.316386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.316412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.316427 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.418474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.418513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.418525 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.418541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.418551 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.521860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.521902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.521913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.521932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.521943 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.624526 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.624561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.624571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.624588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.624601 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.688230 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.688269 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:13:36 crc kubenswrapper[4856]: E1203 09:13:36.688407 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.688477 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:13:36 crc kubenswrapper[4856]: E1203 09:13:36.688580 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:13:36 crc kubenswrapper[4856]: E1203 09:13:36.688686 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.727888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.727957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.727970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.727993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.728007 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.830971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.831107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.831125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.831146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.831159 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.935404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.936079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.936152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.936199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:36 crc kubenswrapper[4856]: I1203 09:13:36.936228 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:36Z","lastTransitionTime":"2025-12-03T09:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.039331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.039390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.039399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.039415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.039429 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.142609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.142666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.142676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.142697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.142708 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.245448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.245525 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.245546 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.245572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.245592 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.349276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.349350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.349359 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.349375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.349385 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.452873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.452976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.453017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.453044 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.453058 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.555625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.555707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.555742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.555868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.555895 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.658272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.658346 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.658364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.658387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.658404 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.688578 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:13:37 crc kubenswrapper[4856]: E1203 09:13:37.688734 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.761691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.761733 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.761741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.761758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.761785 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.864371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.864415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.864427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.864445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.864457 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.967766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.967834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.967846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.967869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:37 crc kubenswrapper[4856]: I1203 09:13:37.967882 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:37Z","lastTransitionTime":"2025-12-03T09:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.070036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.070096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.070123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.070142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.070156 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.172578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.172623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.172633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.172649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.172661 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.275459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.275505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.275514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.275529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.275540 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.379089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.379147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.379164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.379187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.379198 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.482071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.482110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.482119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.482138 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.482152 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.585386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.585422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.585431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.585447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.585458 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688284 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688331 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688375 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:13:38 crc kubenswrapper[4856]: E1203 09:13:38.688424 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.688825 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: E1203 09:13:38.689131 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:13:38 crc kubenswrapper[4856]: E1203 09:13:38.689206 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.689297 4856 scope.go:117] "RemoveContainer" containerID="6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.792336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.792419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.792435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.792461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.792479 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.896115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.896542 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.896637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.896710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:38 crc kubenswrapper[4856]: I1203 09:13:38.896771 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:38Z","lastTransitionTime":"2025-12-03T09:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.000238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.001226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.001317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.001415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.001490 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.105065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.105407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.105503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.105623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.105794 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.208752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.208842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.208856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.208876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.208888 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.274001 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.274168 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.274274 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:43.274231629 +0000 UTC m=+151.457123930 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.274328 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.274426 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:14:43.274399913 +0000 UTC m=+151.457292384 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.274479 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.274705 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.274842 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 09:14:43.274797094 +0000 UTC m=+151.457689555 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.313271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.313323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.313335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.313354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.313367 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.375922 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.375986 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376109 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376109 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376125 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376134 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376141 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376143 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376191 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 09:14:43.376176125 +0000 UTC m=+151.559068426 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.376207 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 09:14:43.376200116 +0000 UTC m=+151.559092417 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.401033 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/2.log" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.405322 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.405867 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.416229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.416272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.416281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.416301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.416312 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.421672 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.436669 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.449827 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.465210 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.487945 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.504793 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.519033 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.519075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.519083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.519104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.519115 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.519219 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.531990 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.547409 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.569993 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.585763 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.599949 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.617234 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.622717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.622757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.622768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.622782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.622793 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.636655 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.649867 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 
09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.674150 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de0278aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.688686 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:39 crc kubenswrapper[4856]: E1203 09:13:39.688927 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.701855 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.714428 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:39Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.725508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.725539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.725551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.725569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.725581 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.827839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.827874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.827883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.827898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.827910 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.931094 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.931141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.931152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.931181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:39 crc kubenswrapper[4856]: I1203 09:13:39.931195 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:39Z","lastTransitionTime":"2025-12-03T09:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.033774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.033844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.033856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.033872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.033888 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.137920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.137987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.138001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.138023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.138038 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.240897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.240945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.240956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.240972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.240983 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.344914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.344990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.345009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.345036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.345056 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.446833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.446876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.446885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.446904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.446924 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.550692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.550769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.550787 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.550830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.550845 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.653484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.653567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.653581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.653607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.653621 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.688760 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:40 crc kubenswrapper[4856]: E1203 09:13:40.688979 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.689276 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:40 crc kubenswrapper[4856]: E1203 09:13:40.689349 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.689620 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:40 crc kubenswrapper[4856]: E1203 09:13:40.689687 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.756795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.756909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.756925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.756949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.756967 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.859574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.859624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.859640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.859656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.859667 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.962504 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.962552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.962566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.962580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:40 crc kubenswrapper[4856]: I1203 09:13:40.962590 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:40Z","lastTransitionTime":"2025-12-03T09:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.065190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.065279 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.065298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.065322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.065335 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.168266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.168334 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.168345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.168366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.168380 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.270713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.270775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.270791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.270845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.270864 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.373988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.374083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.374099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.374132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.374150 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.416952 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/3.log" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.418212 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/2.log" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.422652 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" exitCode=1 Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.422713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.422772 4856 scope.go:117] "RemoveContainer" containerID="6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.423733 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:13:41 crc kubenswrapper[4856]: E1203 09:13:41.424096 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.443022 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
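[annotation] The "back-off 40s restarting failed container" message above is the kubelet's CrashLoopBackOff delay for ovnkube-controller: the delay starts at 10s and doubles on each consecutive failed restart, capped at 5 minutes, so 40s corresponds to the third consecutive failure. A sketch of that doubling, with the 10s initial value and 5m cap taken from the kubelet's documented defaults:

package main

import (
	"fmt"
	"time"
)

// nextBackoff models the kubelet's CrashLoopBackOff delay: double the
// previous delay, starting at initial and never exceeding max.
func nextBackoff(prev, initial, max time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := prev * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	var d time.Duration
	for i := 1; i <= 6; i++ {
		d = nextBackoff(d, 10*time.Second, 5*time.Minute)
		fmt.Printf("restart %d: back-off %s\n", i, d)
	}
	// restart 3 prints "back-off 40s", matching the message logged above.
}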
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.460823 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.475606 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.478447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.478491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.478501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.478516 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.478535 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.492139 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.505362 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
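[annotation] Every "Failed to update status for pod" record in this stretch shares one root cause: the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24, while the node clock reads 2025-12-03. The "certificate has expired or is not yet valid" text comes from the standard NotBefore/NotAfter window test during the TLS handshake; a sketch of that check using crypto/x509 (the PEM path is a placeholder, since the log does not say where the webhook certificate lives):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path, not from the log.
	data, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	now := time.Now()
	// The same validity-window test the TLS handshake applies.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}

Because the API server must call this webhook before admitting the status patch, the failure repeats identically for every pod on the node, as the following records show.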
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.517349 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.533133 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.548640 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.562184 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.575840 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
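[annotation] The patch payloads in these records are hard to read because they are quoted twice: once as a JSON string inside the kubelet's err= value, and once more when the log line is rendered, which is why every quote appears as \\\". One strconv.Unquote per quoting layer recovers plain JSON; a sketch using a deliberately shortened fragment shaped like the node-resolver patch that resumes below (real payloads carry the full condition and container lists):

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// One quoting layer of a shortened status patch, as it would appear
	// in the err= field; the UID is the node-resolver pod's from the log.
	raw := `"{\"metadata\":{\"uid\":\"be0f114f-fadd-4753-8929-4feed01dcf71\"},\"status\":{\"phase\":\"Running\"}}"`

	unquoted, err := strconv.Unquote(raw)
	if err != nil {
		fmt.Println(err)
		return
	}
	var patch struct {
		Metadata struct {
			UID string `json:"uid"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(unquoted), &patch); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("patching status of pod", patch.Metadata.UID, "phase", patch.Status.Phase)
}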
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.581248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.581305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.581320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.581343 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.581359 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.593536 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa426
15aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.615209 4856 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:41Z\\\",\\\"message\\\":\\\"daemon-gzk5w\\\\nI1203 09:13:40.936537 6859 
obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-wfssg\\\\nI1203 09:13:40.936544 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-gzk5w\\\\nI1203 09:13:40.936550 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 09:13:40.936529 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk\\\\nF1203 09:13:40.936574 6859 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:40Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.631095 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 
09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.658200 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de0278aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.671675 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.684868 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.684943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.684967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.684996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.685014 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.688032 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:41 crc kubenswrapper[4856]: E1203 09:13:41.688188 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.690048 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] 
Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.705032 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.727729 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:41Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.787671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.787724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.787734 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.787751 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.787760 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.890840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.890917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.890934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.890964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.890987 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.994022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.994079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.994089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.994109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:41 crc kubenswrapper[4856]: I1203 09:13:41.994122 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:41Z","lastTransitionTime":"2025-12-03T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.097606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.097671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.097683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.097704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.097717 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.200442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.200489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.200498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.200515 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.200526 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.302965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.303006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.303015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.303031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.303040 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.406476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.406526 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.406537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.406556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.406569 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.428317 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/3.log" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.510455 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.510640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.510658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.510679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.510693 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.616161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.616261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.616275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.616299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.616313 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.688354 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.688550 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:42 crc kubenswrapper[4856]: E1203 09:13:42.688616 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.688675 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:42 crc kubenswrapper[4856]: E1203 09:13:42.688873 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:42 crc kubenswrapper[4856]: E1203 09:13:42.689085 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.705218 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.719689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.719778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.719793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.719889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.719908 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.732673 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de0278aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad49
6480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.752071 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.765744 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.784053 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.810985 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:41Z\\\",\\\"message\\\":\\\"daemon-gzk5w\\\\nI1203 09:13:40.936537 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-wfssg\\\\nI1203 09:13:40.936544 6859 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-gzk5w\\\\nI1203 09:13:40.936550 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 09:13:40.936529 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk\\\\nF1203 09:13:40.936574 6859 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:40Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.823718 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.823769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.823781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.823823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.823835 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.827201 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.844616 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.859730 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.879300 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.895309 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.911476 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.926502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.926552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.926563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.926584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.926598 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:42Z","lastTransitionTime":"2025-12-03T09:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.931095 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.948034 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.963481 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.981966 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:42 crc kubenswrapper[4856]: I1203 09:13:42.998196 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:42Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.017573 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:43Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.030602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 
crc kubenswrapper[4856]: I1203 09:13:43.030664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.030676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.030700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.030716 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.134144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.134193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.134206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.134225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.134238 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.236835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.236877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.236890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.236908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.236920 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.340372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.340448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.340461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.340485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.340497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.443108 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.443581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.443717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.443825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.443899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.547505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.547561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.547574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.547598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.547611 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.650870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.650922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.650933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.650955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.650968 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.688677 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:43 crc kubenswrapper[4856]: E1203 09:13:43.688906 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.702904 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.754485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.754583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.754598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.754619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.754632 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.857766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.857865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.857883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.857911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.857931 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.961040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.961082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.961091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.961104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:43 crc kubenswrapper[4856]: I1203 09:13:43.961113 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:43Z","lastTransitionTime":"2025-12-03T09:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.063928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.063973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.063984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.064000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.064012 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.160725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.160763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.160771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.160784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.160794 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.175839 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:44Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.181332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.181393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.181407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.181426 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.181439 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.196491 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:44Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.201766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.201835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.201847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.201866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.201880 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.216481 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:44Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.222074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.222128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.222140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.222159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.222172 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.236697 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:44Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.241369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.241414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.241427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.241452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.241464 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.255740 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:44Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.255905 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.257871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.257912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.257924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.257941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.257953 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.361289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.361357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.361371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.361400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.361419 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.464721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.464785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.464801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.464851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.464868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.567529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.567595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.567604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.567619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.567630 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.669730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.669775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.669786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.669817 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.669830 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.688920 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.689054 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.689090 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.688918 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.689215 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:44 crc kubenswrapper[4856]: E1203 09:13:44.689483 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.772899 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.772952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.772963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.772980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.772989 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.876414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.876470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.876486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.876508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.876520 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.980399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.980474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.980487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.980507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:44 crc kubenswrapper[4856]: I1203 09:13:44.980520 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:44Z","lastTransitionTime":"2025-12-03T09:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.084439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.084496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.084506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.084526 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.084539 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.187886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.187941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.187955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.187972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.187984 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.289996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.290040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.290050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.290062 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.290071 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.393960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.394022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.394034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.394055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.394067 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.497648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.498092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.498166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.498247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.498320 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.601297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.601373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.601385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.601406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.601420 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.688378 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:45 crc kubenswrapper[4856]: E1203 09:13:45.688542 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.704267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.704307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.704319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.704335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.704347 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.807216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.807520 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.807596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.807689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.807787 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.910268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.910347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.910370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.910660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:45 crc kubenswrapper[4856]: I1203 09:13:45.910677 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:45Z","lastTransitionTime":"2025-12-03T09:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.013557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.013596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.013607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.013621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.013634 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.115936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.115996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.116010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.116026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.116037 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.218522 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.218560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.218571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.218585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.218596 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.321172 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.321255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.321265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.321280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.321291 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.424953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.424997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.425010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.425029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.425040 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.527841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.527886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.527895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.527908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.527918 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.630617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.630659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.630669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.630686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.630695 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.688828 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.688903 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:46 crc kubenswrapper[4856]: E1203 09:13:46.688956 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:46 crc kubenswrapper[4856]: E1203 09:13:46.689035 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.689107 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:46 crc kubenswrapper[4856]: E1203 09:13:46.689304 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.733637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.733677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.733686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.733700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.733709 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.836881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.836926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.836939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.836960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.836975 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.939944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.940008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.940020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.940041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:46 crc kubenswrapper[4856]: I1203 09:13:46.940058 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:46Z","lastTransitionTime":"2025-12-03T09:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.042684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.042717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.042726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.042738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.042748 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.145555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.145593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.145602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.145616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.145624 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.248561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.248656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.248670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.248700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.248714 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.351449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.351498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.351513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.351530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.351541 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.454216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.454245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.454256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.454270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.454281 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.556483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.556532 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.556540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.556555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.556566 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.659339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.660084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.660110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.660128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.660138 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.688562 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:47 crc kubenswrapper[4856]: E1203 09:13:47.688824 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.762907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.762957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.762967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.762987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.762999 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.867158 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.867222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.867236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.867258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.867272 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.969936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.969991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.970005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.970030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:47 crc kubenswrapper[4856]: I1203 09:13:47.970051 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:47Z","lastTransitionTime":"2025-12-03T09:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.073206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.073259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.073269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.073290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.073303 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.176991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.177043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.177055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.177072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.177085 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.280700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.280749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.280766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.280794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.280828 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.384554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.384596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.384604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.384619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.384631 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.487231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.487278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.487291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.487308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.487320 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.589895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.589985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.590021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.590042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.590057 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.688180 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.688194 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.688213 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:48 crc kubenswrapper[4856]: E1203 09:13:48.689052 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:48 crc kubenswrapper[4856]: E1203 09:13:48.688741 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:48 crc kubenswrapper[4856]: E1203 09:13:48.689212 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.692091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.692130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.692143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.692162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.692173 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.794362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.794401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.794410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.794424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.794436 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.897649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.897700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.897712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.897728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:48 crc kubenswrapper[4856]: I1203 09:13:48.897742 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:48Z","lastTransitionTime":"2025-12-03T09:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.001600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.001679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.001690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.001708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.001723 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.104944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.105030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.105053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.105085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.105102 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.208063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.208121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.208132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.208150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.208162 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.311920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.312050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.312068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.312094 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.312119 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.414747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.414791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.414826 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.414852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.414867 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.517232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.517290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.517303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.517324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.517338 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.620918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.620961 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.620973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.620991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.621002 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.688919 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:49 crc kubenswrapper[4856]: E1203 09:13:49.689177 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.724248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.724283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.724292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.724305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.724402 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.827955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.828023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.828035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.828056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.828070 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.931493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.931543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.931553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.931576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:49 crc kubenswrapper[4856]: I1203 09:13:49.931587 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:49Z","lastTransitionTime":"2025-12-03T09:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.034550 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.034616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.034630 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.034652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.034668 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.137470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.137530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.137544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.137565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.137578 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.241204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.241247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.241256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.241270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.241291 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.344852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.344886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.344896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.344912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.344924 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.447277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.447328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.447339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.447353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.447365 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.551866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.551932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.551943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.551960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.551970 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.654673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.654711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.654728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.654749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.654762 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.688562 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.688619 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.688583 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:50 crc kubenswrapper[4856]: E1203 09:13:50.688711 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:50 crc kubenswrapper[4856]: E1203 09:13:50.688792 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:50 crc kubenswrapper[4856]: E1203 09:13:50.689083 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.757319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.757394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.757407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.757431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.757446 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.859913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.859951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.859974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.859990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.860001 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.962936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.963003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.963018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.963042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:50 crc kubenswrapper[4856]: I1203 09:13:50.963058 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:50Z","lastTransitionTime":"2025-12-03T09:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.066258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.066294 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.066304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.066321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.066333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.169762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.169833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.169846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.169864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.169879 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.272971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.273028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.273041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.273060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.273071 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.376399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.376445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.376454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.376471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.376482 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.479364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.479425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.479436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.479456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.479469 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.582551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.582606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.582620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.582638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.582651 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.685627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.685695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.685707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.685729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.685743 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.688868 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:51 crc kubenswrapper[4856]: E1203 09:13:51.689014 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
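
The entries above all repeat one failure signature: CRI-O reports NetworkReady=false because nothing has been written to /etc/kubernetes/cni/net.d/ yet, so the kubelet keeps the node NotReady and refuses to create sandboxes for any pod that needs a CNI-managed network (network-metrics-daemon, network-check-target, network-check-source, networking-console-plugin). A minimal Go sketch of that readiness test follows; it is not the kubelet's actual code — the directory name comes from the log, and the accepted file extensions are an assumption based on common CNI config loaders:

// cni_ready_check.go: a minimal sketch (assumption: not kubelet/CRI-O's
// real implementation) of the check behind the "no CNI configuration
// file" message above. The runtime stays NetworkReady=false until a
// config file appears in the CNI conf dir, /etc/kubernetes/cni/net.d/
// on this node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", confDir, err)
		return
	}
	for _, e := range entries {
		// Assumed extensions; common CNI loaders accept these.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
}

On a node like this one the check presumably flips to true once the network provider (the ovnkube-node pod seen elsewhere in this log) writes its configuration into that directory.
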
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.788746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.788828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.788840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.788860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.788872 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.891308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.891358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.891370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.891388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.891399 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.995668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.995744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.995757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.995782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:51 crc kubenswrapper[4856]: I1203 09:13:51.995799 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:51Z","lastTransitionTime":"2025-12-03T09:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.098715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.099479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.099521 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.099547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.099566 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.203029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.203086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.203098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.203117 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.203131 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.306095 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.306141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.306152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.306175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.306190 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.409040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.409124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.409136 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.409158 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.409172 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.512452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.512527 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.512543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.512559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.512571 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.616379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.616437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.616460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.616480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.616492 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.689019 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.689049 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.689506 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:52 crc kubenswrapper[4856]: E1203 09:13:52.689509 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:52 crc kubenswrapper[4856]: E1203 09:13:52.689621 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:52 crc kubenswrapper[4856]: E1203 09:13:52.689953 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
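
The "Failed to update status for pod" entries that follow add a second, independent problem: every status PATCH is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node's clock reads 2025-12-03. A small sketch of how one might confirm that from the host is below; the endpoint is taken from the log, everything else is illustrative:

// webhook_cert_check.go: a minimal sketch for inspecting the serving
// certificate behind the x509 "certificate has expired" errors below.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint named in the log
	// Skip verification deliberately: the point is to inspect the
	// expired certificate, not to reject it as the kubelet's client does.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s expired=%v\n",
		leaf.NotBefore.Format(time.RFC3339),
		leaf.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339),
		now.After(leaf.NotAfter))
}
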
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.706902 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.719431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.719488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.719500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.719518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.719530 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.721066 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.739630 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.762961 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be4ea22e3126530e4c104ce0471970a2bf44a4c1924eb6ae46282a405e1fcfd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:12Z\\\",\\\"message\\\":\\\"try.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cqjvn before timer (time: 2025-12-03 09:13:13.790313855 +0000 UTC m=+1.818178027): skip\\\\nI1203 09:13:12.529056 6469 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529189 6469 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529195 6469 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 09:13:12.529201 6469 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 09:13:12.529205 6469 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 09:13:12.529130 6469 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF1203 09:13:12.529018 6469 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:41Z\\\",\\\"message\\\":\\\"daemon-gzk5w\\\\nI1203 09:13:40.936537 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-wfssg\\\\nI1203 09:13:40.936544 6859 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-gzk5w\\\\nI1203 09:13:40.936550 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 09:13:40.936529 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk\\\\nF1203 09:13:40.936574 6859 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:40Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.778304 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 
09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.801072 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de0278aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.813279 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.822197 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.822246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.822256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.822272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.822283 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.828871 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.847615 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.863713 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.881754 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.898525 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.917313 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.924320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.924366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.924378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.924395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.924405 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:52Z","lastTransitionTime":"2025-12-03T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.932925 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.950320 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.966180 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:52 crc kubenswrapper[4856]: I1203 09:13:52.982178 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:52.999959 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:52Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.015468 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a172a02e-ec8f-4f51-bd90-65d13a197875\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e61d51f69c6aae8a8d7d305e9b32aa0de96e18ad39db268a555d2b575b7ee69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edc
cc6c487a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:53Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.027862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.027934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.027952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.027978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.027996 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.131278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.131339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.131351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.131374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.131400 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.234885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.234946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.234956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.234974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.234985 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.338399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.338451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.338461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.338480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.338493 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.441420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.441464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.441475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.441490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.441501 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.543765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.543837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.543851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.543868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.543880 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.647909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.648211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.648317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.648426 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.648512 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.688593 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:53 crc kubenswrapper[4856]: E1203 09:13:53.688747 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.751390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.751458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.751472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.751491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.751506 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.854925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.854988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.855009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.855035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.855053 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.958015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.958063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.958073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.958089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:53 crc kubenswrapper[4856]: I1203 09:13:53.958100 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:53Z","lastTransitionTime":"2025-12-03T09:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.060394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.060432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.060442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.060459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.060469 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.163790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.163871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.163884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.163904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.163917 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.266843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.266907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.266918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.266937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.266948 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.370275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.370328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.370338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.370360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.370374 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.383974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.384028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.384037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.384054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.384065 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.402194 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.407041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.407303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.407381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.407453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.407548 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.424139 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.430474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.430531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.430544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.430563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.430576 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.445923 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.450783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.450884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.450897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.450918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.450930 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.467331 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.473331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.473378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.473391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.473410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.473424 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.488887 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5537c787-d762-4b55-bd38-ab8197889b01\\\",\\\"systemUUID\\\":\\\"d39e84ae-b2cf-4078-935b-5fb0be0ab617\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:54Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.489098 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.491943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.492013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.492028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.492076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.492089 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.595953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.596054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.596066 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.596084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.596116 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.688714 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.688836 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.688910 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.689095 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.688663 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:54 crc kubenswrapper[4856]: E1203 09:13:54.689240 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.698601 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.698652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.698663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.698679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.698689 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.802489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.802535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.802547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.802567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.802579 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.906394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.906453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.906462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.906482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:54 crc kubenswrapper[4856]: I1203 09:13:54.906494 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:54Z","lastTransitionTime":"2025-12-03T09:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.009670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.009707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.009717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.009732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.009741 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.113475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.113527 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.113539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.113571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.113586 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.216537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.216623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.216642 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.216669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.216685 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.320129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.320191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.320204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.320225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.320239 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.423210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.424007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.424047 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.424076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.424090 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.526747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.526823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.526835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.526850 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.526861 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.629724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.629775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.629791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.629835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.629879 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.688136 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:55 crc kubenswrapper[4856]: E1203 09:13:55.688719 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.688756 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:13:55 crc kubenswrapper[4856]: E1203 09:13:55.689225 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.705393 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a172a02e-ec8f-4f51-bd90-65d13a197875\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e61d51f69c6aae8a8d7d305e9b32aa0de96e18ad39db268a555d2b575b7ee69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.724031 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.732862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.732924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.732941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.732970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.732984 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.750247 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:41Z\\\",\\\"message\\\":\\\"daemon-gzk5w\\\\nI1203 09:13:40.936537 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-wfssg\\\\nI1203 09:13:40.936544 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-gzk5w\\\\nI1203 09:13:40.936550 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 09:13:40.936529 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk\\\\nF1203 09:13:40.936574 6859 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:40Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.764219 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.788067 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de027
8aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.803066 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.816720 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.835892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.835965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.835978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.836006 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.836031 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.837770 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.856403 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.874057 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.892595 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.907642 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.923753 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.939865 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.939882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.940023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.940036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.940076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.940089 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:55Z","lastTransitionTime":"2025-12-03T09:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.951599 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.966121 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:55 crc kubenswrapper[4856]: I1203 09:13:55.986125 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:55Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.004494 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.020849 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:56Z is after 2025-08-24T17:21:41Z" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.043639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.043691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.043703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.043722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.043734 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.146408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.146460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.146470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.146487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.146500 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.249761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.249836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.249853 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.249871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.249884 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.353635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.353733 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.353748 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.353775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.353787 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.456781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.456855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.456866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.456885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.456900 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.559900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.559948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.559958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.559978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.559994 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.663088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.663131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.663141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.663157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.663169 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.688115 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.688185 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:56 crc kubenswrapper[4856]: E1203 09:13:56.688260 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.688325 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:56 crc kubenswrapper[4856]: E1203 09:13:56.688476 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:56 crc kubenswrapper[4856]: E1203 09:13:56.688526 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.766163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.766216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.766228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.766249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.766260 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.869328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.869383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.869400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.869420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.869432 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.972499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.972550 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.972561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.972579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:56 crc kubenswrapper[4856]: I1203 09:13:56.972592 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:56Z","lastTransitionTime":"2025-12-03T09:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.075336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.075385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.075395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.075411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.075421 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.177977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.178016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.178025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.178037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.178047 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.281683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.281747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.281764 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.281786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.281799 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.385165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.385215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.385224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.385241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.385252 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.491113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.491196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.491210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.491235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.491251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.594743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.594797 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.594809 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.594849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.594863 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.688431 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:57 crc kubenswrapper[4856]: E1203 09:13:57.688746 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.694083 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:57 crc kubenswrapper[4856]: E1203 09:13:57.694283 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:13:57 crc kubenswrapper[4856]: E1203 09:13:57.694372 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs podName:f9dd0ced-1cd2-4711-b54f-bdce45437d2c nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.694346349 +0000 UTC m=+169.877238650 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs") pod "network-metrics-daemon-cqjvn" (UID: "f9dd0ced-1cd2-4711-b54f-bdce45437d2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.698476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.698545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.698558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.698587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.698602 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.801533 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.801635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.801651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.801694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.801712 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.905265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.905334 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.905348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.905368 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:57 crc kubenswrapper[4856]: I1203 09:13:57.905381 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:57Z","lastTransitionTime":"2025-12-03T09:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.007623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.007652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.007679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.007692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.007702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.111256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.111312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.111325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.111350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.111364 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.214552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.214601 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.214620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.214635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.214647 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.317793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.317866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.317876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.317912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.317924 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.421143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.421205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.421218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.421239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.421254 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.525493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.525570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.525589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.525608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.525643 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.628681 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.628732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.628744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.628764 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.628777 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.688755 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.688857 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:13:58 crc kubenswrapper[4856]: E1203 09:13:58.688968 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:13:58 crc kubenswrapper[4856]: E1203 09:13:58.689076 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.689144 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:13:58 crc kubenswrapper[4856]: E1203 09:13:58.689202 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.732701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.732775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.732784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.732835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.732847 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.836647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.836713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.836727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.836749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.836765 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.939988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.940036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.940054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.940076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:58 crc kubenswrapper[4856]: I1203 09:13:58.940094 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:58Z","lastTransitionTime":"2025-12-03T09:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.043454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.043550 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.043568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.043590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.043626 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.146754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.146905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.146928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.146954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.146971 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.252582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.252647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.252660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.252678 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.252689 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.357831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.357910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.357924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.357945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.357957 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.461237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.461304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.461316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.461337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.461351 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.563702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.563813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.564054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.564151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.564181 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.668712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.668769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.668778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.668800 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.668814 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.688054 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:13:59 crc kubenswrapper[4856]: E1203 09:13:59.688432 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.771228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.771267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.771278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.771292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.771301 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.873363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.873410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.873424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.873440 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.873454 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.976655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.976746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.976759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.976777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:13:59 crc kubenswrapper[4856]: I1203 09:13:59.976860 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:13:59Z","lastTransitionTime":"2025-12-03T09:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.079250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.079292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.079302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.079319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.079330 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.182147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.182209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.182219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.182237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.182256 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.285001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.285051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.285063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.285081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.285093 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.388370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.388456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.388508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.388544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.388570 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.491553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.491598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.491609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.491626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.491638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.594205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.594328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.594342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.594382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.594394 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.688470 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.688641 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.688791 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:00 crc kubenswrapper[4856]: E1203 09:14:00.688783 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:00 crc kubenswrapper[4856]: E1203 09:14:00.688968 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:00 crc kubenswrapper[4856]: E1203 09:14:00.689043 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.697528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.697591 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.697603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.697622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.697634 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.800302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.800345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.800358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.800374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.800385 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.902977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.903027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.903039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.903056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:00 crc kubenswrapper[4856]: I1203 09:14:00.903068 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:00Z","lastTransitionTime":"2025-12-03T09:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.006753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.006788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.006799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.006870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.006886 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.110095 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.110164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.110178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.110199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.110234 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.212797 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.212924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.212937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.212960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.212971 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.316539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.316595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.316605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.316626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.316637 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.420301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.420371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.420389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.420411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.420427 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.523147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.523189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.523197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.523212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.523222 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.625611 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.625643 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.625652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.625665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.625676 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.688506 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:01 crc kubenswrapper[4856]: E1203 09:14:01.689016 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.728439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.728483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.728493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.728514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.728529 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.830936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.830986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.830997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.831013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.831025 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.933964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.934022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.934047 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.934067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:01 crc kubenswrapper[4856]: I1203 09:14:01.934079 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:01Z","lastTransitionTime":"2025-12-03T09:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.037332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.037404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.037420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.037442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.037454 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.140536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.140586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.140596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.140612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.140624 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.243723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.243792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.243807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.243846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.243857 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.346992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.347067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.347082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.347107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.347121 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.450561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.450644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.450659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.450686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.450700 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.553349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.553397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.553408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.553424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.553434 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.656894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.656973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.657007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.657109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.657213 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.689128 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:02 crc kubenswrapper[4856]: E1203 09:14:02.689298 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.689497 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:02 crc kubenswrapper[4856]: E1203 09:14:02.689557 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.689830 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:02 crc kubenswrapper[4856]: E1203 09:14:02.689894 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.707493 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13c5951970fdd6d9864a63fb8e2d4008fb6f2dafff09952e7feb71dc735c7d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb395d03861f07477ea04ee2f5a04a0a15f164457698d7333a42db83b6dc12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.723591 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3541a85a-a53e-472a-9323-3bdb8c844e1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8016959393ba0a8b2e9067ab7230991a97426de932229f3bb7b96248b380d077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzrmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gzk5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.741918 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zpk2l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29870646-4fde-4ebe-a3a9-0ef904f1bbaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:32Z\\\",\\\"message\\\":\\\"2025-12-03T09:12:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3\\\\n2025-12-03T09:12:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3a89ffb-ca30-4654-97ee-f2f46e7a76a3 to /host/opt/cni/bin/\\\\n2025-12-03T09:12:47Z [verbose] multus-daemon started\\\\n2025-12-03T09:12:47Z [verbose] Readiness Indicator file check\\\\n2025-12-03T09:13:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:13:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcr6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zpk2l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.756689 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:53Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cqjvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.759511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.759561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.759573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.759591 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.759604 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.772149 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e744d29fcf27fbe6f19e07a584254ed37c40265bdee0871284b46fa3dc8d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.787201 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wfssg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9a6e0bf-3097-46e3-ad75-087b63c827dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9ceac9598e5a11b44e16de9460cb7641f7eac0b6a56fde6b023a4b32b9cd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wfssg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.804694 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6a463ad-f5fe-4745-b870-1f71e556848f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:13:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6c1d45b7ac7c42a4f5e0719165e7fcf784d049f0af402a6374725b0b471b501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011a61d96eaa7db3138061cb365cab604c62909ad5a0c98a26f5d5f456530d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6db44634be5c6bf252c00c9284bcd9c8fa4ab668c779dc5955188711b0028033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf56c897231f9a417ae36261b52a73247fc1c0eaaa61b94bfab0e40a93613d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.830445 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c98fdc8-8cd3-4fa2-a733-6427e4052ef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T09:12:29Z\\\",\\\"message\\\":\\\"W1203 09:12:17.933387 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1203 09:12:17.933691 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764753137 cert, and key in /tmp/serving-cert-111286548/serving-signer.crt, /tmp/serving-cert-111286548/serving-signer.key\\\\nI1203 09:12:18.440690 1 observer_polling.go:159] Starting file observer\\\\nW1203 09:12:18.443125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1203 09:12:18.443372 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 09:12:18.444110 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-111286548/tls.crt::/tmp/serving-cert-111286548/tls.key\\\\\\\"\\\\nF1203 09:12:28.985723 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.851103 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e74e410-1351-45c6-bf4b-82e853feb902\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f592549fa0bb3f6f539bfb31e158617211ccceabdf536136cc6dbcd43f90e4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b783e9387db9f7a477a8e674c654cc1adaad9a970b119910374bbb85a6dd0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9067e1e6d65674db207613fa36f0cc3f9ce839129cf247ce90379dc6ced9ff5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.862890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.862936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.862945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.862964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.862975 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.868453 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.884210 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.898754 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a172a02e-ec8f-4f51-bd90-65d13a197875\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e61d51f69c6aae8a8d7d305e9b32aa0de96e18ad39db268a555d2b575b7ee69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f90b211d5e1ab892e626e737ca5d3f5e5a30bfdb6a9538b76edccc6c487a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.915448 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.930579 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44dc6de-92c1-468c-b5d3-7c04eda8fbf4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcd4370e5b086ce8db52a950cbb37ea9202752ce3e24a2e6c37c5b0f6afc01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8021effa3bdd1bd05c1733bca792c4ca16b2a2a97bf91da877b854c24fea0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47mw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zmrwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 
09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.952598 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eeabf973-b0ea-4fbb-8c02-285e6911c8c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b9a147c8b47cab11a9339cb102b0a08bf1f3294707abb22d7ab030f2e34118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a561f9a68b4ad4b7757706db9bab008eb9799ffe848f7a61013dac042daaf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://133ba58f4376371a703588834ef6a2b3b67b2e0db63623c968444810b729458a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c196c87a2e325e011669f82c4b305fcf95de0278aa0ad16466090b74ac7efecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648ca779088b42038cca134dc487cf72fa98cc0c6dd3ef4e1ecff2ed6344732c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2539ab58f0874753b140e80f4dc084d3c7b9c5cd170a49527a37c74389c363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ad496480dc36efcb4f98ef379e4efd891904f9153735e58f3ad4a64deb0b903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:15Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b98c411cc7de99417d8ee25cf3b03dbcb64ef98ed5ac2d65ba42cde15662b1f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.966769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.966814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.966839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.966860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.966881 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:02Z","lastTransitionTime":"2025-12-03T09:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.969389 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cab0f02af92f39877145055fd3d892ec871524694c50abfaf0d7389d7a0ad27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.983098 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d49r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0f114f-fadd-4753-8929-4feed01dcf71\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77c481498ca6c77e74ba4a46d41bfe2d69a1ebfce97137ffb2c6d503732e77b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fngxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d49r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:02 crc kubenswrapper[4856]: I1203 09:14:02.999168 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cf8e52-cc64-49f6-93d4-6368ec50e14c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b7743eb13698adb96619287d648992b08b7866889a054093c52ba0008b713a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dfb4eceeab4355c112451ce7f435fdb19ae104cbcdfd29d7131e25cd40869d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b6fa42615aebbba0d71930dabe5443cdf8afdef017d1f718a6dc837b67e5226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aef14a61e7cc5b12ef25bfd0c0a696da6d61196878e95d41df6b2ec2b2052fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dff7ff12af9f341f045d2a7fd1e19c7dc869f67e5e25f9730c132a9385ce05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2c3f6cbbbcebcdfaebdbe172ddd75babb06725815e3ce18fec4d9036d59a15c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f23c629ac48f39ad0b966236e86c7cb5fbde05f62051bfea9aee85477c425707\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvqm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l9h2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:02Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.020389 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0244a363-96f5-4b97-824c-b62d42ecee2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T09:12:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T09:13:41Z\\\",\\\"message\\\":\\\"daemon-gzk5w\\\\nI1203 09:13:40.936537 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-wfssg\\\\nI1203 09:13:40.936544 6859 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-gzk5w\\\\nI1203 09:13:40.936550 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 09:13:40.936529 6859 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk\\\\nF1203 09:13:40.936574 6859 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:13:40Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T09:13:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T09:12:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T09:12:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T09:12:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T09:12:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h2mjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T09:14:03Z is after 2025-08-24T17:21:41Z" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.069784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.069866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.069879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.069899 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.069930 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.172612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.172658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.172670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.172684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.172693 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.275132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.275201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.275213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.275238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.275251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.378207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.378276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.378288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.378309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.378348 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.481132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.481178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.481188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.481204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.481216 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.587354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.587402 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.587419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.587437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.587452 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.688312 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:03 crc kubenswrapper[4856]: E1203 09:14:03.688852 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.690040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.690087 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.690099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.690115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.690128 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.793609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.793660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.793671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.793691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.793703 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.897470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.897527 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.897545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.897570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:03 crc kubenswrapper[4856]: I1203 09:14:03.897590 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:03Z","lastTransitionTime":"2025-12-03T09:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.001145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.001199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.001210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.001229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.001240 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:04Z","lastTransitionTime":"2025-12-03T09:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.104948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.105014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.105024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.105049 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.105062 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:04Z","lastTransitionTime":"2025-12-03T09:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.207859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.207887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.207895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.207908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.207917 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:04Z","lastTransitionTime":"2025-12-03T09:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.310974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.311016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.311027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.311042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.311054 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:04Z","lastTransitionTime":"2025-12-03T09:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.414436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.414482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.414495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.414514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.414527 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:04Z","lastTransitionTime":"2025-12-03T09:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.509648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.509705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.509744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.509759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.509770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T09:14:04Z","lastTransitionTime":"2025-12-03T09:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.571241 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx"] Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.571640 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.574424 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.574644 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.574776 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.575189 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.620090 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zmrwk" podStartSLOduration=87.620056991 podStartE2EDuration="1m27.620056991s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.591292826 +0000 UTC m=+112.774185137" watchObservedRunningTime="2025-12-03 09:14:04.620056991 +0000 UTC m=+112.802949292" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.620289 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=32.620283757 podStartE2EDuration="32.620283757s" podCreationTimestamp="2025-12-03 09:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.620191395 +0000 UTC m=+112.803083696" watchObservedRunningTime="2025-12-03 09:14:04.620283757 +0000 UTC m=+112.803176058" Dec 03 09:14:04 crc 
kubenswrapper[4856]: I1203 09:14:04.674280 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d49r7" podStartSLOduration=88.674259944 podStartE2EDuration="1m28.674259944s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.651388594 +0000 UTC m=+112.834280895" watchObservedRunningTime="2025-12-03 09:14:04.674259944 +0000 UTC m=+112.857152245" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.674390 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-l9h2m" podStartSLOduration=88.674385017 podStartE2EDuration="1m28.674385017s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.674053389 +0000 UTC m=+112.856945690" watchObservedRunningTime="2025-12-03 09:14:04.674385017 +0000 UTC m=+112.857277318" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.676148 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e30c6a43-a0e8-4913-904a-4636f3f09e10-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.676189 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30c6a43-a0e8-4913-904a-4636f3f09e10-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.676217 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e30c6a43-a0e8-4913-904a-4636f3f09e10-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.676237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e30c6a43-a0e8-4913-904a-4636f3f09e10-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.676289 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e30c6a43-a0e8-4913-904a-4636f3f09e10-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.688857 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.688904 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.688868 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:04 crc kubenswrapper[4856]: E1203 09:14:04.689084 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:04 crc kubenswrapper[4856]: E1203 09:14:04.689200 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:04 crc kubenswrapper[4856]: E1203 09:14:04.689513 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.742000 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podStartSLOduration=88.741980152 podStartE2EDuration="1m28.741980152s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.741589892 +0000 UTC m=+112.924482193" watchObservedRunningTime="2025-12-03 09:14:04.741980152 +0000 UTC m=+112.924872453" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.772533 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zpk2l" podStartSLOduration=88.772496173 podStartE2EDuration="1m28.772496173s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.758089995 +0000 UTC m=+112.940982296" watchObservedRunningTime="2025-12-03 09:14:04.772496173 +0000 UTC m=+112.955388474" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e30c6a43-a0e8-4913-904a-4636f3f09e10-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30c6a43-a0e8-4913-904a-4636f3f09e10-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777565 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e30c6a43-a0e8-4913-904a-4636f3f09e10-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777590 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e30c6a43-a0e8-4913-904a-4636f3f09e10-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e30c6a43-a0e8-4913-904a-4636f3f09e10-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777762 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e30c6a43-a0e8-4913-904a-4636f3f09e10-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.777737 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e30c6a43-a0e8-4913-904a-4636f3f09e10-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.778683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e30c6a43-a0e8-4913-904a-4636f3f09e10-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.785159 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30c6a43-a0e8-4913-904a-4636f3f09e10-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.802280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e30c6a43-a0e8-4913-904a-4636f3f09e10-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gv8tx\" (UID: \"e30c6a43-a0e8-4913-904a-4636f3f09e10\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.818748 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wfssg" podStartSLOduration=88.818723696 podStartE2EDuration="1m28.818723696s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.804304368 +0000 UTC m=+112.987196669" watchObservedRunningTime="2025-12-03 09:14:04.818723696 +0000 UTC m=+113.001615997" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.852729 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.852702288 podStartE2EDuration="1m29.852702288s" podCreationTimestamp="2025-12-03 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.852653017 +0000 UTC m=+113.035545318" watchObservedRunningTime="2025-12-03 09:14:04.852702288 +0000 UTC m=+113.035594589" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.853660 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.853654323 podStartE2EDuration="55.853654323s" podCreationTimestamp="2025-12-03 09:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-03 09:14:04.835353253 +0000 UTC m=+113.018245554" watchObservedRunningTime="2025-12-03 09:14:04.853654323 +0000 UTC m=+113.036546614" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.869473 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.869453278 podStartE2EDuration="1m29.869453278s" podCreationTimestamp="2025-12-03 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.868452762 +0000 UTC m=+113.051345063" watchObservedRunningTime="2025-12-03 09:14:04.869453278 +0000 UTC m=+113.052345579" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.888394 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" Dec 03 09:14:04 crc kubenswrapper[4856]: I1203 09:14:04.918484 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.918444804 podStartE2EDuration="21.918444804s" podCreationTimestamp="2025-12-03 09:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:04.915070246 +0000 UTC m=+113.097962547" watchObservedRunningTime="2025-12-03 09:14:04.918444804 +0000 UTC m=+113.101337105" Dec 03 09:14:05 crc kubenswrapper[4856]: I1203 09:14:05.531858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" event={"ID":"e30c6a43-a0e8-4913-904a-4636f3f09e10","Type":"ContainerStarted","Data":"268955f27d4e3da11179ec6d69778dd1422f5df1baa1025b4f64a1c44a5b1549"} Dec 03 09:14:05 crc kubenswrapper[4856]: I1203 09:14:05.532213 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" event={"ID":"e30c6a43-a0e8-4913-904a-4636f3f09e10","Type":"ContainerStarted","Data":"7e16a25cfc48413008bd8d606fdd8a4951f20ad20be278949d3f33175ac9c339"} Dec 03 09:14:05 crc kubenswrapper[4856]: I1203 09:14:05.551213 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gv8tx" podStartSLOduration=89.551187025 podStartE2EDuration="1m29.551187025s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:05.550140968 +0000 UTC m=+113.733033279" watchObservedRunningTime="2025-12-03 09:14:05.551187025 +0000 UTC m=+113.734079366" Dec 03 09:14:05 crc kubenswrapper[4856]: I1203 09:14:05.688397 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:05 crc kubenswrapper[4856]: E1203 09:14:05.688546 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:06 crc kubenswrapper[4856]: I1203 09:14:06.688394 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:06 crc kubenswrapper[4856]: I1203 09:14:06.688427 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:06 crc kubenswrapper[4856]: I1203 09:14:06.688541 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:06 crc kubenswrapper[4856]: E1203 09:14:06.688598 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:06 crc kubenswrapper[4856]: E1203 09:14:06.688553 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:06 crc kubenswrapper[4856]: E1203 09:14:06.688797 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:07 crc kubenswrapper[4856]: I1203 09:14:07.688645 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:07 crc kubenswrapper[4856]: E1203 09:14:07.688895 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:07 crc kubenswrapper[4856]: I1203 09:14:07.690471 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:14:07 crc kubenswrapper[4856]: E1203 09:14:07.690713 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h2mjf_openshift-ovn-kubernetes(0244a363-96f5-4b97-824c-b62d42ecee2b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" Dec 03 09:14:08 crc kubenswrapper[4856]: I1203 09:14:08.688706 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:08 crc kubenswrapper[4856]: I1203 09:14:08.688908 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:08 crc kubenswrapper[4856]: E1203 09:14:08.689724 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:08 crc kubenswrapper[4856]: E1203 09:14:08.689769 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:08 crc kubenswrapper[4856]: I1203 09:14:08.688932 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:08 crc kubenswrapper[4856]: E1203 09:14:08.689875 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:09 crc kubenswrapper[4856]: I1203 09:14:09.688317 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:09 crc kubenswrapper[4856]: E1203 09:14:09.688790 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:10 crc kubenswrapper[4856]: I1203 09:14:10.688492 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:10 crc kubenswrapper[4856]: I1203 09:14:10.688569 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:10 crc kubenswrapper[4856]: E1203 09:14:10.688645 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:10 crc kubenswrapper[4856]: I1203 09:14:10.688677 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:10 crc kubenswrapper[4856]: E1203 09:14:10.688861 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:10 crc kubenswrapper[4856]: E1203 09:14:10.689015 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:11 crc kubenswrapper[4856]: I1203 09:14:11.688977 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:11 crc kubenswrapper[4856]: E1203 09:14:11.689105 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:12 crc kubenswrapper[4856]: E1203 09:14:12.151352 4856 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 09:14:12 crc kubenswrapper[4856]: I1203 09:14:12.688960 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:12 crc kubenswrapper[4856]: I1203 09:14:12.689026 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:12 crc kubenswrapper[4856]: I1203 09:14:12.689046 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:12 crc kubenswrapper[4856]: E1203 09:14:12.690075 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:12 crc kubenswrapper[4856]: E1203 09:14:12.690152 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:12 crc kubenswrapper[4856]: E1203 09:14:12.690237 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:12 crc kubenswrapper[4856]: E1203 09:14:12.842796 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 09:14:13 crc kubenswrapper[4856]: I1203 09:14:13.688683 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:13 crc kubenswrapper[4856]: E1203 09:14:13.688935 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:14 crc kubenswrapper[4856]: I1203 09:14:14.688477 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:14 crc kubenswrapper[4856]: I1203 09:14:14.688548 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:14 crc kubenswrapper[4856]: E1203 09:14:14.688697 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:14 crc kubenswrapper[4856]: E1203 09:14:14.688849 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:14 crc kubenswrapper[4856]: I1203 09:14:14.689133 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 09:14:14 crc kubenswrapper[4856]: E1203 09:14:14.689225 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 09:14:15 crc kubenswrapper[4856]: I1203 09:14:15.688736 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:14:15 crc kubenswrapper[4856]: E1203 09:14:15.688963 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c" Dec 03 09:14:16 crc kubenswrapper[4856]: I1203 09:14:16.688447 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:16 crc kubenswrapper[4856]: I1203 09:14:16.688549 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 09:14:16 crc kubenswrapper[4856]: E1203 09:14:16.688594 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 09:14:16 crc kubenswrapper[4856]: E1203 09:14:16.688715 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 09:14:16 crc kubenswrapper[4856]: I1203 09:14:16.688836 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:16 crc kubenswrapper[4856]: E1203 09:14:16.688981 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:17 crc kubenswrapper[4856]: I1203 09:14:17.688463 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:17 crc kubenswrapper[4856]: E1203 09:14:17.688738 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:17 crc kubenswrapper[4856]: E1203 09:14:17.844195 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.576403 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/1.log"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.577719 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/0.log"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.577794 4856 generic.go:334] "Generic (PLEG): container finished" podID="29870646-4fde-4ebe-a3a9-0ef904f1bbaa" containerID="b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440" exitCode=1
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.577891 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerDied","Data":"b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440"}
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.578049 4856 scope.go:117] "RemoveContainer" containerID="40f710d1cc5d6f92967fd89023612ff8794916f93b98a1022197447ed8f369d6"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.578666 4856 scope.go:117] "RemoveContainer" containerID="b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440"
Dec 03 09:14:18 crc kubenswrapper[4856]: E1203 09:14:18.578913 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zpk2l_openshift-multus(29870646-4fde-4ebe-a3a9-0ef904f1bbaa)\"" pod="openshift-multus/multus-zpk2l" podUID="29870646-4fde-4ebe-a3a9-0ef904f1bbaa"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.689140 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.689283 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:18 crc kubenswrapper[4856]: E1203 09:14:18.689337 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:18 crc kubenswrapper[4856]: E1203 09:14:18.689502 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:18 crc kubenswrapper[4856]: I1203 09:14:18.689720 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:18 crc kubenswrapper[4856]: E1203 09:14:18.689876 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:19 crc kubenswrapper[4856]: I1203 09:14:19.585837 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/1.log"
Dec 03 09:14:19 crc kubenswrapper[4856]: I1203 09:14:19.688003 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:19 crc kubenswrapper[4856]: E1203 09:14:19.688158 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:20 crc kubenswrapper[4856]: I1203 09:14:20.688913 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:20 crc kubenswrapper[4856]: E1203 09:14:20.689125 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:20 crc kubenswrapper[4856]: I1203 09:14:20.689005 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:20 crc kubenswrapper[4856]: I1203 09:14:20.688984 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:20 crc kubenswrapper[4856]: E1203 09:14:20.689503 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:20 crc kubenswrapper[4856]: E1203 09:14:20.689706 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:21 crc kubenswrapper[4856]: I1203 09:14:21.688756 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:21 crc kubenswrapper[4856]: E1203 09:14:21.688943 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:22 crc kubenswrapper[4856]: I1203 09:14:22.687992 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:22 crc kubenswrapper[4856]: E1203 09:14:22.689097 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:22 crc kubenswrapper[4856]: I1203 09:14:22.689147 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:22 crc kubenswrapper[4856]: I1203 09:14:22.689175 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:22 crc kubenswrapper[4856]: E1203 09:14:22.689256 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:22 crc kubenswrapper[4856]: E1203 09:14:22.689429 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:22 crc kubenswrapper[4856]: I1203 09:14:22.691420 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"
Dec 03 09:14:22 crc kubenswrapper[4856]: E1203 09:14:22.844762 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 09:14:23 crc kubenswrapper[4856]: I1203 09:14:23.604849 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/3.log"
Dec 03 09:14:23 crc kubenswrapper[4856]: I1203 09:14:23.607969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerStarted","Data":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"}
Dec 03 09:14:23 crc kubenswrapper[4856]: I1203 09:14:23.608497 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf"
Dec 03 09:14:23 crc kubenswrapper[4856]: I1203 09:14:23.638519 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podStartSLOduration=106.638499268 podStartE2EDuration="1m46.638499268s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:23.63821938 +0000 UTC m=+131.821111691" watchObservedRunningTime="2025-12-03 09:14:23.638499268 +0000 UTC m=+131.821391569"
Dec 03 09:14:23 crc kubenswrapper[4856]: I1203 09:14:23.688867 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:23 crc kubenswrapper[4856]: E1203 09:14:23.689084 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:24 crc kubenswrapper[4856]: I1203 09:14:24.067078 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cqjvn"]
Dec 03 09:14:24 crc kubenswrapper[4856]: I1203 09:14:24.612381 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:24 crc kubenswrapper[4856]: E1203 09:14:24.613015 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:24 crc kubenswrapper[4856]: I1203 09:14:24.688413 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:24 crc kubenswrapper[4856]: I1203 09:14:24.688454 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:24 crc kubenswrapper[4856]: I1203 09:14:24.688417 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:24 crc kubenswrapper[4856]: E1203 09:14:24.688630 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:24 crc kubenswrapper[4856]: E1203 09:14:24.688752 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:24 crc kubenswrapper[4856]: E1203 09:14:24.688900 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:26 crc kubenswrapper[4856]: I1203 09:14:26.689127 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:26 crc kubenswrapper[4856]: I1203 09:14:26.689181 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:26 crc kubenswrapper[4856]: I1203 09:14:26.689180 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:26 crc kubenswrapper[4856]: E1203 09:14:26.689275 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:26 crc kubenswrapper[4856]: I1203 09:14:26.689335 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:26 crc kubenswrapper[4856]: E1203 09:14:26.689442 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:26 crc kubenswrapper[4856]: E1203 09:14:26.689666 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:26 crc kubenswrapper[4856]: E1203 09:14:26.689762 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:27 crc kubenswrapper[4856]: E1203 09:14:27.846984 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 09:14:28 crc kubenswrapper[4856]: I1203 09:14:28.688429 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:28 crc kubenswrapper[4856]: I1203 09:14:28.688477 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:28 crc kubenswrapper[4856]: I1203 09:14:28.688499 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:28 crc kubenswrapper[4856]: I1203 09:14:28.688597 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:28 crc kubenswrapper[4856]: E1203 09:14:28.688590 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:28 crc kubenswrapper[4856]: E1203 09:14:28.688682 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:28 crc kubenswrapper[4856]: E1203 09:14:28.689076 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:28 crc kubenswrapper[4856]: E1203 09:14:28.689159 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:30 crc kubenswrapper[4856]: I1203 09:14:30.688439 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:30 crc kubenswrapper[4856]: E1203 09:14:30.688631 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:30 crc kubenswrapper[4856]: I1203 09:14:30.688732 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:30 crc kubenswrapper[4856]: I1203 09:14:30.688876 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:30 crc kubenswrapper[4856]: I1203 09:14:30.689031 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:30 crc kubenswrapper[4856]: E1203 09:14:30.689286 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:30 crc kubenswrapper[4856]: E1203 09:14:30.689343 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:30 crc kubenswrapper[4856]: E1203 09:14:30.689442 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:30 crc kubenswrapper[4856]: I1203 09:14:30.689958 4856 scope.go:117] "RemoveContainer" containerID="b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440"
Dec 03 09:14:31 crc kubenswrapper[4856]: I1203 09:14:31.642461 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/1.log"
Dec 03 09:14:31 crc kubenswrapper[4856]: I1203 09:14:31.642561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerStarted","Data":"b847d2d7cf83fd2947a8e319e3e0a7ef3d973721089d0bf813a6eab492a07e06"}
Dec 03 09:14:32 crc kubenswrapper[4856]: I1203 09:14:32.688386 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:32 crc kubenswrapper[4856]: I1203 09:14:32.688566 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:32 crc kubenswrapper[4856]: E1203 09:14:32.690146 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:32 crc kubenswrapper[4856]: I1203 09:14:32.690278 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:32 crc kubenswrapper[4856]: I1203 09:14:32.690335 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:32 crc kubenswrapper[4856]: E1203 09:14:32.690403 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:32 crc kubenswrapper[4856]: E1203 09:14:32.690474 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:32 crc kubenswrapper[4856]: E1203 09:14:32.690936 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:32 crc kubenswrapper[4856]: E1203 09:14:32.847623 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 09:14:34 crc kubenswrapper[4856]: I1203 09:14:34.688287 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:34 crc kubenswrapper[4856]: I1203 09:14:34.688345 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:34 crc kubenswrapper[4856]: I1203 09:14:34.688343 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:34 crc kubenswrapper[4856]: I1203 09:14:34.688497 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:34 crc kubenswrapper[4856]: E1203 09:14:34.688497 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:34 crc kubenswrapper[4856]: E1203 09:14:34.688679 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:34 crc kubenswrapper[4856]: E1203 09:14:34.688710 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:34 crc kubenswrapper[4856]: E1203 09:14:34.688779 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:36 crc kubenswrapper[4856]: I1203 09:14:36.688320 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:36 crc kubenswrapper[4856]: I1203 09:14:36.688578 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:36 crc kubenswrapper[4856]: I1203 09:14:36.688618 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:36 crc kubenswrapper[4856]: I1203 09:14:36.688712 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:36 crc kubenswrapper[4856]: E1203 09:14:36.689391 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 09:14:36 crc kubenswrapper[4856]: E1203 09:14:36.689490 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 09:14:36 crc kubenswrapper[4856]: E1203 09:14:36.689582 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cqjvn" podUID="f9dd0ced-1cd2-4711-b54f-bdce45437d2c"
Dec 03 09:14:36 crc kubenswrapper[4856]: E1203 09:14:36.689672 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.688547 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.688634 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.688577 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.688946 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.692316 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.692459 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.692498 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.692965 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.693113 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 09:14:38 crc kubenswrapper[4856]: I1203 09:14:38.693374 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.012543 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.351528 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.351995 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:43 crc kubenswrapper[4856]: E1203 09:14:43.352123 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:16:45.352071218 +0000 UTC m=+273.534963529 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.352358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.353020 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.360114 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.453624 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.453754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.458313 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.458352 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.508310 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.519961 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 09:14:43 crc kubenswrapper[4856]: I1203 09:14:43.531789 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 09:14:43 crc kubenswrapper[4856]: W1203 09:14:43.752537 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ec3066c5225d44365d96a4cc4b70d4b3a1a0a2f63bd1eff1d0ab28e1e5794d71 WatchSource:0}: Error finding container ec3066c5225d44365d96a4cc4b70d4b3a1a0a2f63bd1eff1d0ab28e1e5794d71: Status 404 returned error can't find the container with id ec3066c5225d44365d96a4cc4b70d4b3a1a0a2f63bd1eff1d0ab28e1e5794d71
Dec 03 09:14:43 crc kubenswrapper[4856]: W1203 09:14:43.757451 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-60a32b65b1f0f24348266d6976297ea392d44147fb35d0904c6421b818d223bd WatchSource:0}: Error finding container 60a32b65b1f0f24348266d6976297ea392d44147fb35d0904c6421b818d223bd: Status 404 returned error can't find the container with id 60a32b65b1f0f24348266d6976297ea392d44147fb35d0904c6421b818d223bd
Dec 03 09:14:44 crc kubenswrapper[4856]: I1203 09:14:44.700216 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ec3066c5225d44365d96a4cc4b70d4b3a1a0a2f63bd1eff1d0ab28e1e5794d71"}
Dec 03 09:14:44 crc kubenswrapper[4856]: I1203 09:14:44.701187 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"60a32b65b1f0f24348266d6976297ea392d44147fb35d0904c6421b818d223bd"}
Dec 03 09:14:44 crc kubenswrapper[4856]: I1203 09:14:44.702210 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7fe55c79f88fb22e7134cc334f30f8e719ca102733f437a53665ba2c2ba4bb4c"}
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.557171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.599147 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fm6l9"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.599841 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.600533 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9hffl"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.601119 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.601529 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4k754"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.601987 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.602535 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.602931 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.603387 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.603649 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.607981 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.608028 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.608245 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.609790 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.610505 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.611485 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.611524 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n2k6t"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.611528 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.611916 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n2k6t"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.611707 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.618411 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.619044 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.620663 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.619051 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.621098 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.621180 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.619346 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.623434 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.623657 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.623871 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.624034 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.624237 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.624898 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626084 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626103 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626089 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626254 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626304 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626308 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626821 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626929 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626982 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.626995 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627119 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627248 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627270 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627400 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627533 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dgg9g"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627680 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627851 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627887 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.627986 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.628035 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.628163 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dgg9g"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.631220 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.637116 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.637307 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.639142 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.640030 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.656599 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2p8h7"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.657624 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.657777 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.657947 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-stb5r"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.658055 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.658451 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.658838 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-stb5r"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.660315 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.660939 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.661127 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.661698 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.661861 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.662869 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7nllr"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.663667 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.663717 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tq5gl"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.664023 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.664192 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.664910 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.665407 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.665816 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-55bjb"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.666583 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-55bjb"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.669842 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.670472 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.670483 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.670758 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.670834 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.671049 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.671507 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.671730 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.671947 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.671941 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t7bzl"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.673085 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.676383 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.676567 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.676708 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.676979 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.677161 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.677171 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.677901 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678077 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678209 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678312 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678362 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678437 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678563 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.678639 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679015 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679217 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679378 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679411 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679604 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679897 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rx8vm"]
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.679917 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.680087 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.680210 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.680315 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.680368 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.681440 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.681865 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683354 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-serving-cert\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683385 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c06506-5c89-4e8b-92c2-c4886d17b6df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683409 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-client-ca\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683428 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dd43e6-7cf4-42d1-9639-70d17ccae700-serving-cert\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683444 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba47662-ffd7-4182-9a06-2f085abcc5e7-serving-cert\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683460 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05034d58-7364-4605-af24-4d89a370ed9d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683476 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1da5fdd-2fea-4a18-a378-39fa1c758b79-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qt7j7\" (UID: \"a1da5fdd-2fea-4a18-a378-39fa1c758b79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttg2\" (UniqueName: \"kubernetes.io/projected/5e734ba6-dfdf-406c-a466-579ad773f451-kube-api-access-9ttg2\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683514 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05034d58-7364-4605-af24-4d89a370ed9d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683528 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-config\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683543 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh7lw\" (UniqueName: \"kubernetes.io/projected/0ba47662-ffd7-4182-9a06-2f085abcc5e7-kube-api-access-rh7lw\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmrv\" (UniqueName: \"kubernetes.io/projected/e731cbd6-1613-4aaf-8b66-10ed4143c1c9-kube-api-access-hkmrv\") pod \"multus-admission-controller-857f4d67dd-9hffl\" (UID: \"e731cbd6-1613-4aaf-8b66-10ed4143c1c9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl"
Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683577 4856 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-trusted-ca-bundle\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683605 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn97j\" (UniqueName: \"kubernetes.io/projected/05034d58-7364-4605-af24-4d89a370ed9d-kube-api-access-zn97j\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683622 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88b2g\" (UniqueName: \"kubernetes.io/projected/a1da5fdd-2fea-4a18-a378-39fa1c758b79-kube-api-access-88b2g\") pod \"cluster-samples-operator-665b6dd947-qt7j7\" (UID: \"a1da5fdd-2fea-4a18-a378-39fa1c758b79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683636 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-service-ca\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683651 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p54x\" (UniqueName: \"kubernetes.io/projected/26dd43e6-7cf4-42d1-9639-70d17ccae700-kube-api-access-7p54x\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683665 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-oauth-serving-cert\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683680 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbjw\" (UniqueName: \"kubernetes.io/projected/f3c06506-5c89-4e8b-92c2-c4886d17b6df-kube-api-access-qpbjw\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683697 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-client-ca\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683714 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-console-config\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683730 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e731cbd6-1613-4aaf-8b66-10ed4143c1c9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9hffl\" (UID: \"e731cbd6-1613-4aaf-8b66-10ed4143c1c9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683745 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c06506-5c89-4e8b-92c2-c4886d17b6df-images\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-oauth-config\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683779 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-config\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683798 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c06506-5c89-4e8b-92c2-c4886d17b6df-config\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.683831 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.684290 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.690122 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.690306 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.690665 4856 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.690979 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691093 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691209 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691319 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691439 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691452 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691538 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691648 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691661 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691766 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691854 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691895 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691958 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.691979 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.698265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.716421 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.726078 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.745256 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.748901 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.749584 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.750269 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.750675 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.751686 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.752554 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.753244 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.754091 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.754344 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.756014 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.756227 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.756644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d89cf8d901926f45b0dacdbaef463aedda0013403955bf4ef2e129068772fd13"} Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.756770 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.757044 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pksqg"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.757126 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.758264 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.762004 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.763356 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.764547 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.765397 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.767385 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.767677 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8jm29"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.769589 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.770640 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"810b3d73c49ba03ce4b35daf0b3aede100d4426942a9070b6605fdf1920257af"} Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.772953 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.774601 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.777429 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2bd13c4c0df6c6bda61157cd70cc622c66601333240e4a60560eb9653f10de6b"} Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.778194 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.783891 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.784749 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.785457 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.785500 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.785820 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787224 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-dir\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787256 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787285 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787316 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787340 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-config\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-serving-cert\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " 
pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c06506-5c89-4e8b-92c2-c4886d17b6df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787433 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpd6l\" (UniqueName: \"kubernetes.io/projected/a7d7cbff-7ff4-4512-b946-61cc310f6959-kube-api-access-fpd6l\") pod \"downloads-7954f5f757-dgg9g\" (UID: \"a7d7cbff-7ff4-4512-b946-61cc310f6959\") " pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787461 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787487 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787517 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-client-ca\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787544 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dd43e6-7cf4-42d1-9639-70d17ccae700-serving-cert\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787576 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lddnc\" (UniqueName: \"kubernetes.io/projected/ab4d4642-8563-4c33-9f0e-de1826944590-kube-api-access-lddnc\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d476d62-0430-48f7-85eb-711083570c5e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba47662-ffd7-4182-9a06-2f085abcc5e7-serving-cert\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787686 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4hg\" (UniqueName: \"kubernetes.io/projected/1d476d62-0430-48f7-85eb-711083570c5e-kube-api-access-np4hg\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787710 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/61769af3-d6a3-42b8-916e-bd4f05ae6b55-kube-api-access-4n7m5\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787741 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05034d58-7364-4605-af24-4d89a370ed9d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787770 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1da5fdd-2fea-4a18-a378-39fa1c758b79-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qt7j7\" (UID: \"a1da5fdd-2fea-4a18-a378-39fa1c758b79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787797 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttg2\" (UniqueName: \"kubernetes.io/projected/5e734ba6-dfdf-406c-a466-579ad773f451-kube-api-access-9ttg2\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787847 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-policies\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24da9442-c7df-4afa-9535-ed9ec33c69a4-config\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05034d58-7364-4605-af24-4d89a370ed9d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.787977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-config\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788020 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d476d62-0430-48f7-85eb-711083570c5e-trusted-ca\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788053 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-trusted-ca-bundle\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-service-ca-bundle\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh7lw\" (UniqueName: \"kubernetes.io/projected/0ba47662-ffd7-4182-9a06-2f085abcc5e7-kube-api-access-rh7lw\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 
03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788152 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmrv\" (UniqueName: \"kubernetes.io/projected/e731cbd6-1613-4aaf-8b66-10ed4143c1c9-kube-api-access-hkmrv\") pod \"multus-admission-controller-857f4d67dd-9hffl\" (UID: \"e731cbd6-1613-4aaf-8b66-10ed4143c1c9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788189 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788214 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d476d62-0430-48f7-85eb-711083570c5e-metrics-tls\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788242 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24da9442-c7df-4afa-9535-ed9ec33c69a4-machine-approver-tls\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjrz4\" (UniqueName: \"kubernetes.io/projected/24da9442-c7df-4afa-9535-ed9ec33c69a4-kube-api-access-tjrz4\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.788966 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789273 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88b2g\" (UniqueName: \"kubernetes.io/projected/a1da5fdd-2fea-4a18-a378-39fa1c758b79-kube-api-access-88b2g\") pod \"cluster-samples-operator-665b6dd947-qt7j7\" (UID: \"a1da5fdd-2fea-4a18-a378-39fa1c758b79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789328 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-service-ca\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 
09:14:45.789362 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789396 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn97j\" (UniqueName: \"kubernetes.io/projected/05034d58-7364-4605-af24-4d89a370ed9d-kube-api-access-zn97j\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789436 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p54x\" (UniqueName: \"kubernetes.io/projected/26dd43e6-7cf4-42d1-9639-70d17ccae700-kube-api-access-7p54x\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789553 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789584 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-oauth-serving-cert\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789609 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789636 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbjw\" (UniqueName: \"kubernetes.io/projected/f3c06506-5c89-4e8b-92c2-c4886d17b6df-kube-api-access-qpbjw\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.789668 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-client-ca\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 
09:14:45.790438 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-trusted-ca-bundle\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.790497 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-service-ca\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.790503 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-config\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.791332 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-client-ca\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.791388 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.791417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.791448 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-console-config\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-console-config\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792115 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792121 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-oauth-serving-cert\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792136 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-oauth-config\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792154 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqwp\" (UniqueName: \"kubernetes.io/projected/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-kube-api-access-pgqwp\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24da9442-c7df-4afa-9535-ed9ec33c69a4-auth-proxy-config\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.792978 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e731cbd6-1613-4aaf-8b66-10ed4143c1c9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9hffl\" (UID: \"e731cbd6-1613-4aaf-8b66-10ed4143c1c9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.793045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c06506-5c89-4e8b-92c2-c4886d17b6df-images\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.793082 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-config\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.793595 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-client-ca\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.794422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-serving-cert\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.794491 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.794568 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1da5fdd-2fea-4a18-a378-39fa1c758b79-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qt7j7\" (UID: \"a1da5fdd-2fea-4a18-a378-39fa1c758b79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.795106 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-config\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.795263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c06506-5c89-4e8b-92c2-c4886d17b6df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.795301 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.795478 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.795709 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c06506-5c89-4e8b-92c2-c4886d17b6df-config\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.795869 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4d4642-8563-4c33-9f0e-de1826944590-serving-cert\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.796234 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.796624 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05034d58-7364-4605-af24-4d89a370ed9d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.796765 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e731cbd6-1613-4aaf-8b66-10ed4143c1c9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9hffl\" (UID: \"e731cbd6-1613-4aaf-8b66-10ed4143c1c9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.796831 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c06506-5c89-4e8b-92c2-c4886d17b6df-config\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.798167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-oauth-config\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.798768 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.800235 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dd43e6-7cf4-42d1-9639-70d17ccae700-serving-cert\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.810523 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c06506-5c89-4e8b-92c2-c4886d17b6df-images\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.812624 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05034d58-7364-4605-af24-4d89a370ed9d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.812817 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba47662-ffd7-4182-9a06-2f085abcc5e7-serving-cert\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.813302 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sjdhc"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.814645 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.814978 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.817332 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.817416 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.818683 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.828618 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.830635 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.830841 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.840385 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.842313 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.844783 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9hffl"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.845434 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.847232 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fm6l9"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.849041 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4k754"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.850764 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.852571 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rmzl5"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.853672 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.853789 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.855226 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2p8h7"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.857719 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7nllr"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.858665 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.860351 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.862715 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.862889 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.864948 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dgg9g"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.866327 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.869309 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.872578 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.874279 
4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rx8vm"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.876045 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.877290 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.878654 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.880177 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8jm29"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.881437 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.881980 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.883024 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.884584 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tq5gl"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.886622 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-stb5r"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.888469 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.889961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-55bjb"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.892281 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pksqg"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.894539 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wg2wf"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.895943 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.896160 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897001 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-config\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897040 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpd6l\" (UniqueName: \"kubernetes.io/projected/a7d7cbff-7ff4-4512-b946-61cc310f6959-kube-api-access-fpd6l\") pod \"downloads-7954f5f757-dgg9g\" (UID: \"a7d7cbff-7ff4-4512-b946-61cc310f6959\") " pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897068 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897091 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897111 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lddnc\" (UniqueName: \"kubernetes.io/projected/ab4d4642-8563-4c33-9f0e-de1826944590-kube-api-access-lddnc\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897129 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d476d62-0430-48f7-85eb-711083570c5e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897157 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np4hg\" (UniqueName: 
\"kubernetes.io/projected/1d476d62-0430-48f7-85eb-711083570c5e-kube-api-access-np4hg\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/61769af3-d6a3-42b8-916e-bd4f05ae6b55-kube-api-access-4n7m5\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897249 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897273 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24da9442-c7df-4afa-9535-ed9ec33c69a4-config\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-policies\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897324 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d476d62-0430-48f7-85eb-711083570c5e-trusted-ca\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897344 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-service-ca-bundle\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897402 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d476d62-0430-48f7-85eb-711083570c5e-metrics-tls\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: 
\"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897420 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24da9442-c7df-4afa-9535-ed9ec33c69a4-machine-approver-tls\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897440 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjrz4\" (UniqueName: \"kubernetes.io/projected/24da9442-c7df-4afa-9535-ed9ec33c69a4-kube-api-access-tjrz4\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897531 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897555 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897584 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897603 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897623 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: 
\"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897641 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgqwp\" (UniqueName: \"kubernetes.io/projected/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-kube-api-access-pgqwp\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24da9442-c7df-4afa-9535-ed9ec33c69a4-auth-proxy-config\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4d4642-8563-4c33-9f0e-de1826944590-serving-cert\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897698 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-dir\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897735 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.897755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.898153 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24da9442-c7df-4afa-9535-ed9ec33c69a4-config\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc 
kubenswrapper[4856]: I1203 09:14:45.898271 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-config\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.898473 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.898855 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-dir\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.898937 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24da9442-c7df-4afa-9535-ed9ec33c69a4-auth-proxy-config\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.899597 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.900159 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.900290 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-policies\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.900929 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.901438 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.902743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4d4642-8563-4c33-9f0e-de1826944590-service-ca-bundle\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.904012 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d476d62-0430-48f7-85eb-711083570c5e-trusted-ca\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.904382 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24da9442-c7df-4afa-9535-ed9ec33c69a4-machine-approver-tls\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.904639 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d476d62-0430-48f7-85eb-711083570c5e-metrics-tls\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.904833 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.904987 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.905442 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.906560 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.906683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.907333 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.907399 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.907553 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.909298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.910273 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4d4642-8563-4c33-9f0e-de1826944590-serving-cert\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.911349 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t7bzl"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.917497 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.918951 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.920541 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.922689 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.922792 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.927037 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n2k6t"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.927420 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.938789 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.944333 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rmzl5"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.945088 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.945395 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.946242 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wg2wf"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.949817 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.959159 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dt8dl"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.960693 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.963751 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.967027 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5fgm9"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.974747 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5fgm9"] Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.974911 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:45 crc kubenswrapper[4856]: I1203 09:14:45.984257 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.003009 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.022706 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.043214 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.062068 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.083524 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.102678 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.123290 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.142594 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.162794 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.222510 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.243250 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.262487 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.282470 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.302784 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.323604 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.342206 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.362941 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: 
I1203 09:14:46.384213 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.403532 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.423273 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.443532 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.463597 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.483569 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.502724 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.522510 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.550794 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.562258 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.582777 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.603442 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.623447 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.643123 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.663571 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.683382 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.703391 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.723044 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.742898 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.763640 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.780246 4856 request.go:700] Waited for 1.010155658s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.782999 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.803736 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.823270 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.843841 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.863636 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.882445 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.903116 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.922779 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.943983 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.962963 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 09:14:46 crc kubenswrapper[4856]: I1203 09:14:46.982471 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.002825 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.038393 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttg2\" (UniqueName: \"kubernetes.io/projected/5e734ba6-dfdf-406c-a466-579ad773f451-kube-api-access-9ttg2\") pod \"console-f9d7485db-n2k6t\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.058943 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh7lw\" 
(UniqueName: \"kubernetes.io/projected/0ba47662-ffd7-4182-9a06-2f085abcc5e7-kube-api-access-rh7lw\") pod \"controller-manager-879f6c89f-fm6l9\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.080797 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p54x\" (UniqueName: \"kubernetes.io/projected/26dd43e6-7cf4-42d1-9639-70d17ccae700-kube-api-access-7p54x\") pod \"route-controller-manager-6576b87f9c-k2n8f\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.097850 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88b2g\" (UniqueName: \"kubernetes.io/projected/a1da5fdd-2fea-4a18-a378-39fa1c758b79-kube-api-access-88b2g\") pod \"cluster-samples-operator-665b6dd947-qt7j7\" (UID: \"a1da5fdd-2fea-4a18-a378-39fa1c758b79\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.122308 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.122946 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbjw\" (UniqueName: \"kubernetes.io/projected/f3c06506-5c89-4e8b-92c2-c4886d17b6df-kube-api-access-qpbjw\") pod \"machine-api-operator-5694c8668f-4k754\" (UID: \"f3c06506-5c89-4e8b-92c2-c4886d17b6df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.138592 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn97j\" (UniqueName: \"kubernetes.io/projected/05034d58-7364-4605-af24-4d89a370ed9d-kube-api-access-zn97j\") pod \"openshift-controller-manager-operator-756b6f6bc6-zf885\" (UID: \"05034d58-7364-4605-af24-4d89a370ed9d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.158966 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmrv\" (UniqueName: \"kubernetes.io/projected/e731cbd6-1613-4aaf-8b66-10ed4143c1c9-kube-api-access-hkmrv\") pod \"multus-admission-controller-857f4d67dd-9hffl\" (UID: \"e731cbd6-1613-4aaf-8b66-10ed4143c1c9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.160281 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.163091 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.182771 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.203953 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.226772 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.227721 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.235135 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.239236 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.242656 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.275417 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.309553 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.313349 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.313495 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.313673 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.323064 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.363691 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.418121 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.418473 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.418612 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.421491 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.444191 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.462870 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.483537 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.504979 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.523077 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.543836 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.656729 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.659671 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.661057 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.661091 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.662251 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.668572 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.853030 4856 request.go:700] Waited for 1.955387261s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.858564 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.859338 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.882409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/61769af3-d6a3-42b8-916e-bd4f05ae6b55-kube-api-access-4n7m5\") pod \"oauth-openshift-558db77b4-2p8h7\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.884070 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lddnc\" (UniqueName: \"kubernetes.io/projected/ab4d4642-8563-4c33-9f0e-de1826944590-kube-api-access-lddnc\") pod \"authentication-operator-69f744f599-tq5gl\" (UID: \"ab4d4642-8563-4c33-9f0e-de1826944590\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.887007 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpd6l\" (UniqueName: \"kubernetes.io/projected/a7d7cbff-7ff4-4512-b946-61cc310f6959-kube-api-access-fpd6l\") pod \"downloads-7954f5f757-dgg9g\" (UID: \"a7d7cbff-7ff4-4512-b946-61cc310f6959\") " pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.887453 4856 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.889465 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgqwp\" (UniqueName: \"kubernetes.io/projected/33d2e5b3-0b33-4671-a4d6-cb0c463a6d93-kube-api-access-pgqwp\") pod \"openshift-apiserver-operator-796bbdcf4f-nrgr6\" (UID: \"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.891334 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjrz4\" (UniqueName: \"kubernetes.io/projected/24da9442-c7df-4afa-9535-ed9ec33c69a4-kube-api-access-tjrz4\") pod \"machine-approver-56656f9798-66b5d\" (UID: \"24da9442-c7df-4afa-9535-ed9ec33c69a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.891762 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d476d62-0430-48f7-85eb-711083570c5e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.892302 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.893703 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np4hg\" (UniqueName: \"kubernetes.io/projected/1d476d62-0430-48f7-85eb-711083570c5e-kube-api-access-np4hg\") pod \"ingress-operator-5b745b69d9-tdppw\" (UID: \"1d476d62-0430-48f7-85eb-711083570c5e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.904274 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.923684 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.943848 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.954055 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:14:47 crc kubenswrapper[4856]: I1203 09:14:47.963864 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.009570 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.036823 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fm6l9"] Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.052290 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.062500 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c24750-bafb-4669-94f3-a82c227cfdcc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.062799 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c24750-bafb-4669-94f3-a82c227cfdcc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063048 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc404e1-6730-4d2a-b57d-1a2af56390e5-config\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063132 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063208 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-ca\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-service-ca\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063314 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-serving-cert\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063384 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc404e1-6730-4d2a-b57d-1a2af56390e5-serving-cert\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 
09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-encryption-config\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063517 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8t5c\" (UniqueName: \"kubernetes.io/projected/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-kube-api-access-l8t5c\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/98c24750-bafb-4669-94f3-a82c227cfdcc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wskln\" (UniqueName: \"kubernetes.io/projected/98c24750-bafb-4669-94f3-a82c227cfdcc-kube-api-access-wskln\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063644 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063718 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qg7p\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-kube-api-access-8qg7p\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063777 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-registry-tls\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063836 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-encryption-config\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: 
I1203 09:14:48.063895 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.063974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-serving-cert\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064006 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-audit\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064039 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11cb3301-096a-4379-aa98-92390b343969-serving-cert\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064143 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-registry-certificates\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064199 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-image-import-ca\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064248 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41569dbf-e024-4bb2-84b8-e56f8d00e389-metrics-tls\") pod \"dns-operator-744455d44c-stb5r\" (UID: \"41569dbf-e024-4bb2-84b8-e56f8d00e389\") " pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064280 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064318 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-audit-policies\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-trusted-ca\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064374 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/11cb3301-096a-4379-aa98-92390b343969-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-etcd-client\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064427 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-etcd-serving-ca\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064453 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-config\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064479 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-serving-cert\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064511 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbc404e1-6730-4d2a-b57d-1a2af56390e5-trusted-ca\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064536 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-audit-dir\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064620 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2zg\" (UniqueName: \"kubernetes.io/projected/11cb3301-096a-4379-aa98-92390b343969-kube-api-access-wc2zg\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmc6\" (UniqueName: \"kubernetes.io/projected/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-kube-api-access-8rmc6\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064671 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvftf\" (UniqueName: \"kubernetes.io/projected/dbc404e1-6730-4d2a-b57d-1a2af56390e5-kube-api-access-gvftf\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064698 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzjm\" (UniqueName: \"kubernetes.io/projected/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-kube-api-access-9bzjm\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-node-pullsecrets\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33cee223-4bd1-4769-b794-5607b6610b92-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064844 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-config\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064893 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33cee223-4bd1-4769-b794-5607b6610b92-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 
03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-client\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.064982 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-bound-sa-token\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.065007 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-audit-dir\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: E1203 09:14:48.068334 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:48.568305997 +0000 UTC m=+156.751198298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.068458 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qwl\" (UniqueName: \"kubernetes.io/projected/41569dbf-e024-4bb2-84b8-e56f8d00e389-kube-api-access-w4qwl\") pod \"dns-operator-744455d44c-stb5r\" (UID: \"41569dbf-e024-4bb2-84b8-e56f8d00e389\") " pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.069135 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-etcd-client\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.096901 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.170474 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.170916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fda50a-7d59-4053-bd6b-0863c51de8e1-config\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171013 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fda50a-7d59-4053-bd6b-0863c51de8e1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41569dbf-e024-4bb2-84b8-e56f8d00e389-metrics-tls\") pod \"dns-operator-744455d44c-stb5r\" (UID: \"41569dbf-e024-4bb2-84b8-e56f8d00e389\") " pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171097 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c2dac10-bf1a-4906-b51a-efe700b59b90-images\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171141 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171161 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171182 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639f9569-7171-4a05-b3d0-74d75d49cc20-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: 
\"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171292 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rct\" (UniqueName: \"kubernetes.io/projected/4fab7782-323c-4f3f-95ed-ea320135d284-kube-api-access-84rct\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171324 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-audit-policies\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.171386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.219927 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gcb\" (UniqueName: \"kubernetes.io/projected/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-kube-api-access-28gcb\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/11cb3301-096a-4379-aa98-92390b343969-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-etcd-client\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220112 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-etcd-serving-ca\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220185 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbc404e1-6730-4d2a-b57d-1a2af56390e5-trusted-ca\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220213 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-config\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-audit-dir\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220328 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcxd\" (UniqueName: \"kubernetes.io/projected/433e476c-4ead-4091-a686-2f6059b36947-kube-api-access-xqcxd\") pod \"ingress-canary-wg2wf\" (UID: \"433e476c-4ead-4091-a686-2f6059b36947\") " pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220359 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmc6\" (UniqueName: \"kubernetes.io/projected/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-kube-api-access-8rmc6\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/433e476c-4ead-4091-a686-2f6059b36947-cert\") pod \"ingress-canary-wg2wf\" (UID: \"433e476c-4ead-4091-a686-2f6059b36947\") " pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220462 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bzjm\" (UniqueName: \"kubernetes.io/projected/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-kube-api-access-9bzjm\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220495 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9x6l\" (UniqueName: \"kubernetes.io/projected/54816846-8b12-4987-9e20-54b377252468-kube-api-access-v9x6l\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220539 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33cee223-4bd1-4769-b794-5607b6610b92-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220642 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54816846-8b12-4987-9e20-54b377252468-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220674 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/412eb7b9-351d-44b2-a427-6c26da5d1e39-secret-volume\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220872 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54816846-8b12-4987-9e20-54b377252468-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220907 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-srv-cert\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220956 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-config\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.220992 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-webhook-cert\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221149 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mvb\" (UniqueName: 
\"kubernetes.io/projected/7b9e41f7-3b5b-461d-b0d9-a28daec02d37-kube-api-access-g6mvb\") pod \"control-plane-machine-set-operator-78cbb6b69f-85pdd\" (UID: \"7b9e41f7-3b5b-461d-b0d9-a28daec02d37\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221184 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/639f9569-7171-4a05-b3d0-74d75d49cc20-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221248 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-encryption-config\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221287 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-registration-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrfbp\" (UniqueName: \"kubernetes.io/projected/0aa057f1-9f16-42e5-be6c-f7712d1e5938-kube-api-access-mrfbp\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: E1203 09:14:48.221469 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:48.721419616 +0000 UTC m=+156.904311917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221604 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqgv\" (UniqueName: \"kubernetes.io/projected/6edb5f9d-0476-4676-b340-9d2eceaa42e3-kube-api-access-8jqgv\") pod \"package-server-manager-789f6589d5-b2fpx\" (UID: \"6edb5f9d-0476-4676-b340-9d2eceaa42e3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/98c24750-bafb-4669-94f3-a82c227cfdcc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221721 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-mountpoint-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221820 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qg7p\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-kube-api-access-8qg7p\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221861 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wskln\" (UniqueName: \"kubernetes.io/projected/98c24750-bafb-4669-94f3-a82c227cfdcc-kube-api-access-wskln\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221896 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ae7d610-3fd9-4574-aff6-5c36d33abac2-srv-cert\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221936 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/18006383-7713-4ae1-aac5-2b04d139cad6-node-bootstrap-token\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 
09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.221970 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktwx\" (UniqueName: \"kubernetes.io/projected/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-kube-api-access-lktwx\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222008 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222035 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8dv8\" (UniqueName: \"kubernetes.io/projected/19d74018-0d57-4f93-a298-64b08e3df414-kube-api-access-v8dv8\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222111 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222147 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-serving-cert\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-audit\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222213 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52j2\" (UniqueName: \"kubernetes.io/projected/3c2dac10-bf1a-4906-b51a-efe700b59b90-kube-api-access-x52j2\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222246 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/18006383-7713-4ae1-aac5-2b04d139cad6-certs\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222281 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fab7782-323c-4f3f-95ed-ea320135d284-service-ca-bundle\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222307 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-socket-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9v82\" (UniqueName: \"kubernetes.io/projected/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-kube-api-access-g9v82\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222374 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4cwg\" (UniqueName: \"kubernetes.io/projected/412eb7b9-351d-44b2-a427-6c26da5d1e39-kube-api-access-n4cwg\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222407 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-serving-cert\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222461 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-registry-certificates\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-image-import-ca\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222531 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-plugins-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222564 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8967b2f3-208f-425a-9180-1143ec912230-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222591 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a93c4de5-9d7f-404b-b218-57871d7a7dc1-metrics-tls\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222619 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-trusted-ca\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.222647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/412eb7b9-351d-44b2-a427-6c26da5d1e39-config-volume\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223558 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/11cb3301-096a-4379-aa98-92390b343969-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223686 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7sq\" (UniqueName: \"kubernetes.io/projected/43e4514a-c7b3-4960-a283-510cbdff66e0-kube-api-access-pj7sq\") pod \"migrator-59844c95c7-4ddj5\" (UID: \"43e4514a-c7b3-4960-a283-510cbdff66e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223750 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-serving-cert\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223830 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0aa057f1-9f16-42e5-be6c-f7712d1e5938-signing-cabundle\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc2zg\" (UniqueName: \"kubernetes.io/projected/11cb3301-096a-4379-aa98-92390b343969-kube-api-access-wc2zg\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223941 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvftf\" (UniqueName: \"kubernetes.io/projected/dbc404e1-6730-4d2a-b57d-1a2af56390e5-kube-api-access-gvftf\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.223975 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-apiservice-cert\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33cee223-4bd1-4769-b794-5607b6610b92-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224048 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-config\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-node-pullsecrets\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224112 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c4de5-9d7f-404b-b218-57871d7a7dc1-config-volume\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224155 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ae7d610-3fd9-4574-aff6-5c36d33abac2-profile-collector-cert\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224193 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8967b2f3-208f-425a-9180-1143ec912230-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224269 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-client\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-proxy-tls\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224337 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-bound-sa-token\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-audit-dir\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224397 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qwl\" (UniqueName: \"kubernetes.io/projected/41569dbf-e024-4bb2-84b8-e56f8d00e389-kube-api-access-w4qwl\") pod \"dns-operator-744455d44c-stb5r\" (UID: \"41569dbf-e024-4bb2-84b8-e56f8d00e389\") " pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224449 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-etcd-client\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224483 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639f9569-7171-4a05-b3d0-74d75d49cc20-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c24750-bafb-4669-94f3-a82c227cfdcc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224582 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c24750-bafb-4669-94f3-a82c227cfdcc-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224618 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c2dac10-bf1a-4906-b51a-efe700b59b90-proxy-tls\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224648 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzt89\" (UniqueName: \"kubernetes.io/projected/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-kube-api-access-rzt89\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-metrics-certs\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224723 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c2dac10-bf1a-4906-b51a-efe700b59b90-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-csi-data-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224781 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxbl\" (UniqueName: \"kubernetes.io/projected/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-kube-api-access-2lxbl\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224837 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fda50a-7d59-4053-bd6b-0863c51de8e1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6d7\" (UniqueName: 
\"kubernetes.io/projected/18006383-7713-4ae1-aac5-2b04d139cad6-kube-api-access-4g6d7\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224908 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-ca\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-service-ca\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.224990 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc404e1-6730-4d2a-b57d-1a2af56390e5-config\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225044 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b9e41f7-3b5b-461d-b0d9-a28daec02d37-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-85pdd\" (UID: \"7b9e41f7-3b5b-461d-b0d9-a28daec02d37\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225081 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-serving-cert\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225113 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc404e1-6730-4d2a-b57d-1a2af56390e5-serving-cert\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-stats-auth\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8t5c\" (UniqueName: \"kubernetes.io/projected/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-kube-api-access-l8t5c\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdqh\" (UniqueName: \"kubernetes.io/projected/a93c4de5-9d7f-404b-b218-57871d7a7dc1-kube-api-access-qgdqh\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-registry-tls\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225462 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-encryption-config\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225563 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0aa057f1-9f16-42e5-be6c-f7712d1e5938-signing-key\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edb5f9d-0476-4676-b340-9d2eceaa42e3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b2fpx\" (UID: \"6edb5f9d-0476-4676-b340-9d2eceaa42e3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225672 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11cb3301-096a-4379-aa98-92390b343969-serving-cert\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225706 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-default-certificate\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225760 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zch7b\" (UniqueName: \"kubernetes.io/projected/9ae7d610-3fd9-4574-aff6-5c36d33abac2-kube-api-access-zch7b\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225792 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8967b2f3-208f-425a-9180-1143ec912230-config\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.225843 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-tmpfs\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.232314 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/98c24750-bafb-4669-94f3-a82c227cfdcc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.234194 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41569dbf-e024-4bb2-84b8-e56f8d00e389-metrics-tls\") pod \"dns-operator-744455d44c-stb5r\" (UID: \"41569dbf-e024-4bb2-84b8-e56f8d00e389\") " pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.237904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-etcd-serving-ca\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.240069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-etcd-client\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.240877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.240882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-audit-policies\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: 
\"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: E1203 09:14:48.242648 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:48.742626973 +0000 UTC m=+156.925519274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.249578 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-audit-dir\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.251905 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-config\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.266665 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc404e1-6730-4d2a-b57d-1a2af56390e5-config\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.267961 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-ca\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.268309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-service-ca\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.268781 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-audit\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.269416 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.270399 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33cee223-4bd1-4769-b794-5607b6610b92-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.270554 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbc404e1-6730-4d2a-b57d-1a2af56390e5-trusted-ca\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.275461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.275849 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-registry-certificates\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.276199 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-image-import-ca\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.276370 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-encryption-config\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.277335 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c24750-bafb-4669-94f3-a82c227cfdcc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.427366 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11cb3301-096a-4379-aa98-92390b343969-serving-cert\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.428271 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-trusted-ca\") pod 
\"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.429067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-audit-dir\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.431470 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-encryption-config\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.435188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-serving-cert\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.441760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bzjm\" (UniqueName: \"kubernetes.io/projected/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-kube-api-access-9bzjm\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.443286 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8t5c\" (UniqueName: \"kubernetes.io/projected/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-kube-api-access-l8t5c\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.445409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-etcd-client\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.448515 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-serving-cert\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.450259 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-etcd-client\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.450555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-node-pullsecrets\") pod 
\"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.451472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd-config\") pod \"etcd-operator-b45778765-rx8vm\" (UID: \"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.452770 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.454184 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.454294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8dv8\" (UniqueName: \"kubernetes.io/projected/19d74018-0d57-4f93-a298-64b08e3df414-kube-api-access-v8dv8\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.455298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x52j2\" (UniqueName: \"kubernetes.io/projected/3c2dac10-bf1a-4906-b51a-efe700b59b90-kube-api-access-x52j2\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.455392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/18006383-7713-4ae1-aac5-2b04d139cad6-certs\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.455487 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-serving-cert\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.455694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fab7782-323c-4f3f-95ed-ea320135d284-service-ca-bundle\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.455721 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-socket-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.456037 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c24750-bafb-4669-94f3-a82c227cfdcc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.456538 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-socket-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.456608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9v82\" (UniqueName: \"kubernetes.io/projected/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-kube-api-access-g9v82\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.456642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4cwg\" (UniqueName: \"kubernetes.io/projected/412eb7b9-351d-44b2-a427-6c26da5d1e39-kube-api-access-n4cwg\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: E1203 09:14:48.457870 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:48.957828143 +0000 UTC m=+157.140720444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.456754 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.458967 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wskln\" (UniqueName: \"kubernetes.io/projected/98c24750-bafb-4669-94f3-a82c227cfdcc-kube-api-access-wskln\") pod \"cluster-image-registry-operator-dc59b4c8b-99xt8\" (UID: \"98c24750-bafb-4669-94f3-a82c227cfdcc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.605705 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-plugins-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.456864 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-plugins-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.605755 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qg7p\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-kube-api-access-8qg7p\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.605481 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606271 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-registry-tls\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606583 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d04ab91-7b77-4086-87cc-f02f3e57c9c6-serving-cert\") pod \"apiserver-76f77b778f-t7bzl\" (UID: \"6d04ab91-7b77-4086-87cc-f02f3e57c9c6\") " pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606640 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8967b2f3-208f-425a-9180-1143ec912230-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606665 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a93c4de5-9d7f-404b-b218-57871d7a7dc1-metrics-tls\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/412eb7b9-351d-44b2-a427-6c26da5d1e39-config-volume\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606707 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7sq\" (UniqueName: \"kubernetes.io/projected/43e4514a-c7b3-4960-a283-510cbdff66e0-kube-api-access-pj7sq\") pod \"migrator-59844c95c7-4ddj5\" (UID: \"43e4514a-c7b3-4960-a283-510cbdff66e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606708 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc404e1-6730-4d2a-b57d-1a2af56390e5-serving-cert\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606727 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0aa057f1-9f16-42e5-be6c-f7712d1e5938-signing-cabundle\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606761 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-apiservice-cert\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606787 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c4de5-9d7f-404b-b218-57871d7a7dc1-config-volume\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ae7d610-3fd9-4574-aff6-5c36d33abac2-profile-collector-cert\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8967b2f3-208f-425a-9180-1143ec912230-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmc6\" (UniqueName: \"kubernetes.io/projected/56d86bd1-ade4-4858-ba65-8dd0edd7cf3d-kube-api-access-8rmc6\") pod \"apiserver-7bbb656c7d-bxcf2\" (UID: \"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606858 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-proxy-tls\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.606854 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fab7782-323c-4f3f-95ed-ea320135d284-service-ca-bundle\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.607439 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639f9569-7171-4a05-b3d0-74d75d49cc20-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.607585 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-metrics-certs\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" 
Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.608666 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0aa057f1-9f16-42e5-be6c-f7712d1e5938-signing-cabundle\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.611481 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/412eb7b9-351d-44b2-a427-6c26da5d1e39-config-volume\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.612970 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a93c4de5-9d7f-404b-b218-57871d7a7dc1-config-volume\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.613047 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c2dac10-bf1a-4906-b51a-efe700b59b90-proxy-tls\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.613081 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzt89\" (UniqueName: \"kubernetes.io/projected/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-kube-api-access-rzt89\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.613175 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c2dac10-bf1a-4906-b51a-efe700b59b90-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.626202 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639f9569-7171-4a05-b3d0-74d75d49cc20-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.629291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-csi-data-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.629506 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-csi-data-dir\") pod 
\"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.629734 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxbl\" (UniqueName: \"kubernetes.io/projected/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-kube-api-access-2lxbl\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.629979 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fda50a-7d59-4053-bd6b-0863c51de8e1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6d7\" (UniqueName: \"kubernetes.io/projected/18006383-7713-4ae1-aac5-2b04d139cad6-kube-api-access-4g6d7\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b9e41f7-3b5b-461d-b0d9-a28daec02d37-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-85pdd\" (UID: \"7b9e41f7-3b5b-461d-b0d9-a28daec02d37\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-stats-auth\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630095 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdqh\" (UniqueName: \"kubernetes.io/projected/a93c4de5-9d7f-404b-b218-57871d7a7dc1-kube-api-access-qgdqh\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630123 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0aa057f1-9f16-42e5-be6c-f7712d1e5938-signing-key\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630147 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edb5f9d-0476-4676-b340-9d2eceaa42e3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b2fpx\" (UID: \"6edb5f9d-0476-4676-b340-9d2eceaa42e3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 
03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630167 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-default-certificate\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630184 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-tmpfs\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630205 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zch7b\" (UniqueName: \"kubernetes.io/projected/9ae7d610-3fd9-4574-aff6-5c36d33abac2-kube-api-access-zch7b\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630224 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8967b2f3-208f-425a-9180-1143ec912230-config\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630257 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fda50a-7d59-4053-bd6b-0863c51de8e1-config\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630285 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fda50a-7d59-4053-bd6b-0863c51de8e1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c2dac10-bf1a-4906-b51a-efe700b59b90-images\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630320 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630344 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630363 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639f9569-7171-4a05-b3d0-74d75d49cc20-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630378 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28gcb\" (UniqueName: \"kubernetes.io/projected/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-kube-api-access-28gcb\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630394 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84rct\" (UniqueName: \"kubernetes.io/projected/4fab7782-323c-4f3f-95ed-ea320135d284-kube-api-access-84rct\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630433 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqcxd\" (UniqueName: \"kubernetes.io/projected/433e476c-4ead-4091-a686-2f6059b36947-kube-api-access-xqcxd\") pod \"ingress-canary-wg2wf\" (UID: \"433e476c-4ead-4091-a686-2f6059b36947\") " pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630464 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/433e476c-4ead-4091-a686-2f6059b36947-cert\") pod \"ingress-canary-wg2wf\" (UID: \"433e476c-4ead-4091-a686-2f6059b36947\") " pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630493 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9x6l\" (UniqueName: \"kubernetes.io/projected/54816846-8b12-4987-9e20-54b377252468-kube-api-access-v9x6l\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54816846-8b12-4987-9e20-54b377252468-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630558 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/412eb7b9-351d-44b2-a427-6c26da5d1e39-secret-volume\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-srv-cert\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630611 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54816846-8b12-4987-9e20-54b377252468-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630635 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-config\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630665 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6mvb\" (UniqueName: \"kubernetes.io/projected/7b9e41f7-3b5b-461d-b0d9-a28daec02d37-kube-api-access-g6mvb\") pod \"control-plane-machine-set-operator-78cbb6b69f-85pdd\" (UID: \"7b9e41f7-3b5b-461d-b0d9-a28daec02d37\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630696 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-webhook-cert\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630732 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/639f9569-7171-4a05-b3d0-74d75d49cc20-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630756 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-registration-dir\") pod 
\"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630781 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrfbp\" (UniqueName: \"kubernetes.io/projected/0aa057f1-9f16-42e5-be6c-f7712d1e5938-kube-api-access-mrfbp\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630845 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jqgv\" (UniqueName: \"kubernetes.io/projected/6edb5f9d-0476-4676-b340-9d2eceaa42e3-kube-api-access-8jqgv\") pod \"package-server-manager-789f6589d5-b2fpx\" (UID: \"6edb5f9d-0476-4676-b340-9d2eceaa42e3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630872 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-mountpoint-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630895 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ae7d610-3fd9-4574-aff6-5c36d33abac2-srv-cert\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/18006383-7713-4ae1-aac5-2b04d139cad6-node-bootstrap-token\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.630958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lktwx\" (UniqueName: \"kubernetes.io/projected/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-kube-api-access-lktwx\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.639833 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8967b2f3-208f-425a-9180-1143ec912230-config\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.640041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c2dac10-bf1a-4906-b51a-efe700b59b90-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 
09:14:48.640969 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-config\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.642573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-mountpoint-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:48 crc kubenswrapper[4856]: I1203 09:14:48.645796 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fda50a-7d59-4053-bd6b-0863c51de8e1-config\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.041589 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edb5f9d-0476-4676-b340-9d2eceaa42e3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b2fpx\" (UID: \"6edb5f9d-0476-4676-b340-9d2eceaa42e3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.042108 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-stats-auth\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.042608 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b9e41f7-3b5b-461d-b0d9-a28daec02d37-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-85pdd\" (UID: \"7b9e41f7-3b5b-461d-b0d9-a28daec02d37\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.043103 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-apiservice-cert\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.043611 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-proxy-tls\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.044480 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvftf\" (UniqueName: 
\"kubernetes.io/projected/dbc404e1-6730-4d2a-b57d-1a2af56390e5-kube-api-access-gvftf\") pod \"console-operator-58897d9998-55bjb\" (UID: \"dbc404e1-6730-4d2a-b57d-1a2af56390e5\") " pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.044777 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8967b2f3-208f-425a-9180-1143ec912230-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.045188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33cee223-4bd1-4769-b794-5607b6610b92-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.045542 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c2dac10-bf1a-4906-b51a-efe700b59b90-proxy-tls\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.052944 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.053163 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-webhook-cert\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.053429 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-serving-cert\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.053773 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qwl\" (UniqueName: \"kubernetes.io/projected/41569dbf-e024-4bb2-84b8-e56f8d00e389-kube-api-access-w4qwl\") pod \"dns-operator-744455d44c-stb5r\" (UID: \"41569dbf-e024-4bb2-84b8-e56f8d00e389\") " pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.054262 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9v82\" (UniqueName: \"kubernetes.io/projected/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-kube-api-access-g9v82\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.054308 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-bound-sa-token\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.054773 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ae7d610-3fd9-4574-aff6-5c36d33abac2-profile-collector-cert\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.055315 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2a9769c-77a1-42ca-bd81-446d2ebcd4c6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ct5kt\" (UID: \"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.056828 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqcxd\" (UniqueName: \"kubernetes.io/projected/433e476c-4ead-4091-a686-2f6059b36947-kube-api-access-xqcxd\") pod \"ingress-canary-wg2wf\" (UID: \"433e476c-4ead-4091-a686-2f6059b36947\") " pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.057225 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a93c4de5-9d7f-404b-b218-57871d7a7dc1-metrics-tls\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.057674 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/433e476c-4ead-4091-a686-2f6059b36947-cert\") pod \"ingress-canary-wg2wf\" (UID: \"433e476c-4ead-4091-a686-2f6059b36947\") " pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.057785 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/18006383-7713-4ae1-aac5-2b04d139cad6-certs\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.058001 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-metrics-certs\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.058467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54816846-8b12-4987-9e20-54b377252468-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: 
\"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.058928 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-tmpfs\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.059053 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.062836 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.063125 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84rct\" (UniqueName: \"kubernetes.io/projected/4fab7782-323c-4f3f-95ed-ea320135d284-kube-api-access-84rct\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.063646 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fda50a-7d59-4053-bd6b-0863c51de8e1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.064209 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzt89\" (UniqueName: \"kubernetes.io/projected/0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8-kube-api-access-rzt89\") pod \"service-ca-operator-777779d784-ndjc2\" (UID: \"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.065172 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0aa057f1-9f16-42e5-be6c-f7712d1e5938-signing-key\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.065370 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/412eb7b9-351d-44b2-a427-6c26da5d1e39-secret-volume\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.065391 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-registration-dir\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.065655 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.066279 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8dv8\" (UniqueName: \"kubernetes.io/projected/19d74018-0d57-4f93-a298-64b08e3df414-kube-api-access-v8dv8\") pod \"marketplace-operator-79b997595-pksqg\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.066322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ae7d610-3fd9-4574-aff6-5c36d33abac2-srv-cert\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.068571 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x52j2\" (UniqueName: \"kubernetes.io/projected/3c2dac10-bf1a-4906-b51a-efe700b59b90-kube-api-access-x52j2\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.071160 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc2zg\" (UniqueName: \"kubernetes.io/projected/11cb3301-096a-4379-aa98-92390b343969-kube-api-access-wc2zg\") pod \"openshift-config-operator-7777fb866f-mcbr8\" (UID: \"11cb3301-096a-4379-aa98-92390b343969\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.170847 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4cwg\" (UniqueName: \"kubernetes.io/projected/412eb7b9-351d-44b2-a427-6c26da5d1e39-kube-api-access-n4cwg\") pod \"collect-profiles-29412540-mbjsr\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.173071 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.174180 4856 util.go:30] "No sandbox for pod can be found. 
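
The interleaved "operationExecutor.MountVolume started" (reconciler_common.go:218) and "MountVolume.SetUp succeeded" (operation_generator.go:637) entries above are the kubelet volume manager's reconciler at work: each pass compares the desired state of the world (every volume a scheduled pod declares, keyed by its UniqueName) against the actual state (volumes already mounted), dispatches an asynchronous mount for anything missing, and records the success on a later pass. A minimal sketch of that desired-vs-actual loop in Go (type and function names are invented for illustration; this is not kubelet source):

package main

import "fmt"

// volume stands in for the (UniqueName, pod) pairs in the log entries above.
type volume struct{ uniqueName, pod string }

// reconcile performs one pass: any volume that is desired but not yet
// mounted has a mount "started", and on completion is recorded as mounted,
// which is what produces the paired started/succeeded log entries.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.uniqueName, v.pod)
		// the real operation executor runs the mount asynchronously;
		// this sketch pretends it finished immediately
		mounted[v.uniqueName] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.uniqueName, v.pod)
	}
}

func main() {
	desired := []volume{
		{"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-default-certificate", "openshift-ingress/router-default-5444994796-sjdhc"},
		{"kubernetes.io/configmap/8967b2f3-208f-425a-9180-1143ec912230-config", "openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr"},
	}
	reconcile(desired, map[string]bool{})
}

Because the mounts run asynchronously and the reconciler re-runs on a short fixed period (100ms by default), the started and succeeded entries for one volume are usually separated by entries for other pods, as above.
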
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.175497 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54816846-8b12-4987-9e20-54b377252468-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.176493 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7sq\" (UniqueName: \"kubernetes.io/projected/43e4514a-c7b3-4960-a283-510cbdff66e0-kube-api-access-pj7sq\") pod \"migrator-59844c95c7-4ddj5\" (UID: \"43e4514a-c7b3-4960-a283-510cbdff66e0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.178540 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.178504842 +0000 UTC m=+158.361397303 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.184116 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.187556 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.199023 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c2dac10-bf1a-4906-b51a-efe700b59b90-images\") pod \"machine-config-operator-74547568cd-8jsxv\" (UID: \"3c2dac10-bf1a-4906-b51a-efe700b59b90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.210575 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4fab7782-323c-4f3f-95ed-ea320135d284-default-certificate\") pod \"router-default-5444994796-sjdhc\" (UID: \"4fab7782-323c-4f3f-95ed-ea320135d284\") " pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.211543 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8967b2f3-208f-425a-9180-1143ec912230-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dw7cr\" (UID: \"8967b2f3-208f-425a-9180-1143ec912230\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.212349 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/18006383-7713-4ae1-aac5-2b04d139cad6-node-bootstrap-token\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.217278 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6mvb\" (UniqueName: \"kubernetes.io/projected/7b9e41f7-3b5b-461d-b0d9-a28daec02d37-kube-api-access-g6mvb\") pod \"control-plane-machine-set-operator-78cbb6b69f-85pdd\" (UID: \"7b9e41f7-3b5b-461d-b0d9-a28daec02d37\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.217877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fda50a-7d59-4053-bd6b-0863c51de8e1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qgmxw\" (UID: \"d6fda50a-7d59-4053-bd6b-0863c51de8e1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.218971 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wg2wf" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.220471 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktwx\" (UniqueName: \"kubernetes.io/projected/0b2fbf18-1b72-4721-9182-3bc74aeed3c6-kube-api-access-lktwx\") pod \"packageserver-d55dfcdfc-29h7h\" (UID: \"0b2fbf18-1b72-4721-9182-3bc74aeed3c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.222028 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jqgv\" (UniqueName: \"kubernetes.io/projected/6edb5f9d-0476-4676-b340-9d2eceaa42e3-kube-api-access-8jqgv\") pod \"package-server-manager-789f6589d5-b2fpx\" (UID: \"6edb5f9d-0476-4676-b340-9d2eceaa42e3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.222450 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28gcb\" (UniqueName: \"kubernetes.io/projected/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-kube-api-access-28gcb\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.222918 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08dc03ba-8b0a-4b5c-8b32-a874fdb195a8-srv-cert\") pod \"olm-operator-6b444d44fb-gmm5d\" (UID: \"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.223362 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639f9569-7171-4a05-b3d0-74d75d49cc20-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.224013 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9x6l\" (UniqueName: \"kubernetes.io/projected/54816846-8b12-4987-9e20-54b377252468-kube-api-access-v9x6l\") pod \"kube-storage-version-migrator-operator-b67b599dd-b2qz7\" (UID: \"54816846-8b12-4987-9e20-54b377252468\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.225302 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdqh\" (UniqueName: \"kubernetes.io/projected/a93c4de5-9d7f-404b-b218-57871d7a7dc1-kube-api-access-qgdqh\") pod \"dns-default-rmzl5\" (UID: \"a93c4de5-9d7f-404b-b218-57871d7a7dc1\") " pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.227513 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6d7\" (UniqueName: \"kubernetes.io/projected/18006383-7713-4ae1-aac5-2b04d139cad6-kube-api-access-4g6d7\") pod \"machine-config-server-dt8dl\" (UID: \"18006383-7713-4ae1-aac5-2b04d139cad6\") " pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.227942 4856 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dt8dl" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.228931 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxbl\" (UniqueName: \"kubernetes.io/projected/2da94f26-0a24-411f-9e4b-a5f7ac82c15f-kube-api-access-2lxbl\") pod \"csi-hostpathplugin-5fgm9\" (UID: \"2da94f26-0a24-411f-9e4b-a5f7ac82c15f\") " pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.232416 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrfbp\" (UniqueName: \"kubernetes.io/projected/0aa057f1-9f16-42e5-be6c-f7712d1e5938-kube-api-access-mrfbp\") pod \"service-ca-9c57cc56f-8jm29\" (UID: \"0aa057f1-9f16-42e5-be6c-f7712d1e5938\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.232779 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zch7b\" (UniqueName: \"kubernetes.io/projected/9ae7d610-3fd9-4574-aff6-5c36d33abac2-kube-api-access-zch7b\") pod \"catalog-operator-68c6474976-lqvvn\" (UID: \"9ae7d610-3fd9-4574-aff6-5c36d33abac2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.250207 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.253836 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/639f9569-7171-4a05-b3d0-74d75d49cc20-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9wdgf\" (UID: \"639f9569-7171-4a05-b3d0-74d75d49cc20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.267863 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.280657 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.303993 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.305315 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:49.80529083 +0000 UTC m=+157.988183321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.347088 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.347151 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.352454 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.372216 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.373422 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.388768 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.437626 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.438906 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.444496 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.444659 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.445557 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.446556 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.451272 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.456943 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:49.95690476 +0000 UTC m=+158.139797061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.461921 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.467154 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.484645 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.485859 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9hffl"] Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.500227 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.508287 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rmzl5" Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.511167 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" event={"ID":"24da9442-c7df-4afa-9535-ed9ec33c69a4","Type":"ContainerStarted","Data":"25b4f55f63dc59459f40d71e893b0d78c368c5a1e3b992ae942ab60fdcdf71c9"} Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.552050 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" event={"ID":"0ba47662-ffd7-4182-9a06-2f085abcc5e7","Type":"ContainerStarted","Data":"992e7048e8e9c9730df46cf7df7f6e2414c920cce8a6c2ec45e2ebff4010d6f5"} Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.559327 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.566523 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.066473667 +0000 UTC m=+158.249365968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.665424 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"] Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.665871 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.666684 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.166630926 +0000 UTC m=+158.349523227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.674841 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7"] Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.768323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.768961 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.268940852 +0000 UTC m=+158.451833153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.869461 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.869617 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.369587733 +0000 UTC m=+158.552480034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.870277 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.870896 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.370868467 +0000 UTC m=+158.553760778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.913303 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n2k6t"] Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.927585 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4k754"] Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.945194 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885"] Dec 03 09:14:49 crc kubenswrapper[4856]: I1203 09:14:49.971796 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:49 crc kubenswrapper[4856]: E1203 09:14:49.972199 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.472133065 +0000 UTC m=+158.655025366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.074234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.075167 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.575117549 +0000 UTC m=+158.758009840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: W1203 09:14:50.097684 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fab7782_323c_4f3f_95ed_ea320135d284.slice/crio-dfa3b4df8328d354f6b5668c42c3c1fc37573be4f950d6cf2280e85740b1da89 WatchSource:0}: Error finding container dfa3b4df8328d354f6b5668c42c3c1fc37573be4f950d6cf2280e85740b1da89: Status 404 returned error can't find the container with id dfa3b4df8328d354f6b5668c42c3c1fc37573be4f950d6cf2280e85740b1da89 Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.182336 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.182764 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.682719763 +0000 UTC m=+158.865612064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: W1203 09:14:50.259824 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e734ba6_dfdf_406c_a466_579ad773f451.slice/crio-0ae45e18f49e7cbeb05a4939a0d916a3c41b17cf5b2644f51187b262fbfeb3ac WatchSource:0}: Error finding container 0ae45e18f49e7cbeb05a4939a0d916a3c41b17cf5b2644f51187b262fbfeb3ac: Status 404 returned error can't find the container with id 0ae45e18f49e7cbeb05a4939a0d916a3c41b17cf5b2644f51187b262fbfeb3ac Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.283967 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.284593 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.784562937 +0000 UTC m=+158.967455238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.384977 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.385710 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.885691562 +0000 UTC m=+159.068583863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.487856 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.488403 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:50.988375097 +0000 UTC m=+159.171267398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.600261 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.601493 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.101476016 +0000 UTC m=+159.284368317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.841367 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.841986 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.341945529 +0000 UTC m=+159.524837830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.944451 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.944792 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.444749168 +0000 UTC m=+159.627641469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:50 crc kubenswrapper[4856]: I1203 09:14:50.944918 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:50 crc kubenswrapper[4856]: E1203 09:14:50.945628 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.445602121 +0000 UTC m=+159.628494592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.082876 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.083416 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.583392818 +0000 UTC m=+159.766285129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.092672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" event={"ID":"f3c06506-5c89-4e8b-92c2-c4886d17b6df","Type":"ContainerStarted","Data":"5b4c447de98622df9bc619799157ba8b2ebac6f5bcecaaa3ffa366d897dd744b"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.094652 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" event={"ID":"e731cbd6-1613-4aaf-8b66-10ed4143c1c9","Type":"ContainerStarted","Data":"fe311308308771e9d76c460bf0ad4fb81f96fff6008ca34e35fef1c95aee86fa"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.096686 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" event={"ID":"0ba47662-ffd7-4182-9a06-2f085abcc5e7","Type":"ContainerStarted","Data":"dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.098043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" event={"ID":"26dd43e6-7cf4-42d1-9639-70d17ccae700","Type":"ContainerStarted","Data":"69847289fdd61273b13220a5c78555e41a33c5e6642975fa7d024a69e8205fa7"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.098438 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.099218 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" event={"ID":"05034d58-7364-4605-af24-4d89a370ed9d","Type":"ContainerStarted","Data":"7192d8eb88ffd881d454b4e92cb5885030b49d6e7c8535d268d694cf528864e4"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.101103 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k2n8f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.101177 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.102028 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sjdhc" event={"ID":"4fab7782-323c-4f3f-95ed-ea320135d284","Type":"ContainerStarted","Data":"dfa3b4df8328d354f6b5668c42c3c1fc37573be4f950d6cf2280e85740b1da89"} Dec 03 09:14:51 crc 
kubenswrapper[4856]: I1203 09:14:51.103372 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2k6t" event={"ID":"5e734ba6-dfdf-406c-a466-579ad773f451","Type":"ContainerStarted","Data":"0ae45e18f49e7cbeb05a4939a0d916a3c41b17cf5b2644f51187b262fbfeb3ac"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.104909 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" event={"ID":"24da9442-c7df-4afa-9535-ed9ec33c69a4","Type":"ContainerStarted","Data":"6c36eb7d29da791ef285770ac5f96b574cee2c1f1cf9e6177284cc48c63e50ce"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.106035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dt8dl" event={"ID":"18006383-7713-4ae1-aac5-2b04d139cad6","Type":"ContainerStarted","Data":"9423342e1434d4934bd18ffe0e4d487dc855bf4e07955f7803707fad4db2fd7b"} Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.185019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.185484 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.685466998 +0000 UTC m=+159.868359299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.385908 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.387483 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.88745413 +0000 UTC m=+160.070346431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.496189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.496881 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:51.996857142 +0000 UTC m=+160.179749443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.738139 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.739080 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.23904661 +0000 UTC m=+160.421938921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.840964 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.841433 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.341417107 +0000 UTC m=+160.524309408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.904141 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" podStartSLOduration=134.904094143 podStartE2EDuration="2m14.904094143s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:51.801249553 +0000 UTC m=+159.984141854" watchObservedRunningTime="2025-12-03 09:14:51.904094143 +0000 UTC m=+160.086986444" Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.905593 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" podStartSLOduration=134.905585262 podStartE2EDuration="2m14.905585262s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:51.902600464 +0000 UTC m=+160.085492765" watchObservedRunningTime="2025-12-03 09:14:51.905585262 +0000 UTC m=+160.088477563" Dec 03 09:14:51 crc kubenswrapper[4856]: I1203 09:14:51.988699 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:51 crc kubenswrapper[4856]: E1203 09:14:51.989066 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.489040043 +0000 UTC m=+160.671932354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.091258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.091847 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.591823601 +0000 UTC m=+160.774715902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.114080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dt8dl" event={"ID":"18006383-7713-4ae1-aac5-2b04d139cad6","Type":"ContainerStarted","Data":"c241db9b44fd94f9f90362050c32a30e126e49163b684d4975c6e85ede588fb3"} Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.117945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" event={"ID":"a1da5fdd-2fea-4a18-a378-39fa1c758b79","Type":"ContainerStarted","Data":"a68b1da3904fbd6826aed46baa796000baa3a8c1d2311668094340621447c159"} Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.135066 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" event={"ID":"26dd43e6-7cf4-42d1-9639-70d17ccae700","Type":"ContainerStarted","Data":"5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942"} Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.135160 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.135264 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k2n8f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 09:14:52 crc 
kubenswrapper[4856]: I1203 09:14:52.135312 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.136146 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fm6l9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.136172 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.193326 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.194592 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.694568139 +0000 UTC m=+160.877460440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.344749 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.345279 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.845259964 +0000 UTC m=+161.028152265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.459997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.460338 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.960279754 +0000 UTC m=+161.143172065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.460543 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.461700 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:52.961685381 +0000 UTC m=+161.144577862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.561781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.562049 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.062023215 +0000 UTC m=+161.244915516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.562152 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.562537 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.062524618 +0000 UTC m=+161.245416919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.663834 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.664183 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.164135016 +0000 UTC m=+161.347027317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.664521 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.665254 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.165241955 +0000 UTC m=+161.348134256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.759060 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.759160 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.766290 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.766996 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.266966415 +0000 UTC m=+161.449858716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.868441 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.869068 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.369049015 +0000 UTC m=+161.551941316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:52 crc kubenswrapper[4856]: I1203 09:14:52.970515 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:52 crc kubenswrapper[4856]: E1203 09:14:52.971285 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.471259448 +0000 UTC m=+161.654151739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.066958 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dt8dl" podStartSLOduration=8.06694031 podStartE2EDuration="8.06694031s" podCreationTimestamp="2025-12-03 09:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:52.141277009 +0000 UTC m=+160.324169330" watchObservedRunningTime="2025-12-03 09:14:53.06694031 +0000 UTC m=+161.249832611" Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.075525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.076037 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.576018488 +0000 UTC m=+161.758910789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.144144 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" event={"ID":"24da9442-c7df-4afa-9535-ed9ec33c69a4","Type":"ContainerStarted","Data":"b4bc74afd956b12292c342c09f11a98b42f0830c2e637483cec6c3d13644aade"} Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.160418 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" event={"ID":"f3c06506-5c89-4e8b-92c2-c4886d17b6df","Type":"ContainerStarted","Data":"571c222b9087f86020b3901236bc3ec1438abe0d1154801c8ec51ca6c1c579e4"} Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.169887 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sjdhc" event={"ID":"4fab7782-323c-4f3f-95ed-ea320135d284","Type":"ContainerStarted","Data":"c3431a4f3778e6435ead66d06aea526e2290ae28c9b35689d69f24d2de7b240e"} Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.177645 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.178681 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.678663883 +0000 UTC m=+161.861556184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.275750 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" event={"ID":"e731cbd6-1613-4aaf-8b66-10ed4143c1c9","Type":"ContainerStarted","Data":"7e5ddfdffdac22482a494717f1aafddd2a9472362c48e5b88c945b3c9aef0f82"} Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.288603 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.289299 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.789274467 +0000 UTC m=+161.972166768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.295144 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sjdhc" podStartSLOduration=136.29512026 podStartE2EDuration="2m16.29512026s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:53.262670158 +0000 UTC m=+161.445562459" watchObservedRunningTime="2025-12-03 09:14:53.29512026 +0000 UTC m=+161.478012561" Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.296022 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66b5d" podStartSLOduration=137.296015194 podStartE2EDuration="2m17.296015194s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:53.290277783 +0000 UTC m=+161.473170104" watchObservedRunningTime="2025-12-03 09:14:53.296015194 +0000 UTC m=+161.478907495" Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.307944 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" 
event={"ID":"a1da5fdd-2fea-4a18-a378-39fa1c758b79","Type":"ContainerStarted","Data":"6e756227a02a2aa24b1e2c9a8e2578d315e5ac1aaa5351fa57b900756ccabc4f"} Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.308908 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fm6l9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.309016 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.309175 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k2n8f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.309291 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.422980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.425239 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:53.925205264 +0000 UTC m=+162.108097565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.520102 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.525559 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.526188 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.026164244 +0000 UTC m=+162.209056545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.685965 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.687265 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.187236063 +0000 UTC m=+162.370128364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.787887 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.788496 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.28846279 +0000 UTC m=+162.471355211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.890098 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.890516 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.390480659 +0000 UTC m=+162.573372970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:53 crc kubenswrapper[4856]: I1203 09:14:53.890635 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:53 crc kubenswrapper[4856]: E1203 09:14:53.891117 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.391088685 +0000 UTC m=+162.573981166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.082712 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.083439 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.583411264 +0000 UTC m=+162.766303575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.096907 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dgg9g"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.170019 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.172869 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.185408 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.185950 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.685923454 +0000 UTC m=+162.868815765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.293458 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.293736 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.793693454 +0000 UTC m=+162.976585745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.293789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.294300 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.794286569 +0000 UTC m=+162.977178870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.321866 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2p8h7"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.331062 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" event={"ID":"a1da5fdd-2fea-4a18-a378-39fa1c758b79","Type":"ContainerStarted","Data":"ee5c70d1ef226d6825bf8eb356e52538f7ff89db5b3bd9148b58f5afa2c65788"} Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.356456 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2k6t" event={"ID":"5e734ba6-dfdf-406c-a466-579ad773f451","Type":"ContainerStarted","Data":"48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30"} Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.370765 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" event={"ID":"05034d58-7364-4605-af24-4d89a370ed9d","Type":"ContainerStarted","Data":"5da2275e9984476ed22e6512bb02595ddc2672087ec7087b70862ceb601063a9"} Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.377320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" event={"ID":"f3c06506-5c89-4e8b-92c2-c4886d17b6df","Type":"ContainerStarted","Data":"c361eb0d33ca323de9513c4e80f995538fefda8da551fa309b90ace69364fed1"} Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.384866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" 
event={"ID":"e731cbd6-1613-4aaf-8b66-10ed4143c1c9","Type":"ContainerStarted","Data":"25be3f176230f5ad6bcb984e6225c43b12fcfd2f88d5cf188ca6b6cccf36fa85"} Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.400926 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.402329 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:54.902308075 +0000 UTC m=+163.085200376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.415738 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.415842 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.439195 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.557599 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.557996 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.057981162 +0000 UTC m=+163.240873463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.575508 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.575586 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.576736 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tq5gl"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.577978 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qt7j7" podStartSLOduration=138.577945596 podStartE2EDuration="2m18.577945596s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:54.564604086 +0000 UTC m=+162.747496387" watchObservedRunningTime="2025-12-03 09:14:54.577945596 +0000 UTC m=+162.760837917" Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.583995 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8"] Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.768771 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.769952 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.269917546 +0000 UTC m=+163.452809857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.774145 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zf885" podStartSLOduration=138.774116886 podStartE2EDuration="2m18.774116886s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:54.773317955 +0000 UTC m=+162.956210256" watchObservedRunningTime="2025-12-03 09:14:54.774116886 +0000 UTC m=+162.957009187" Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.865107 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4k754" podStartSLOduration=137.865081414 podStartE2EDuration="2m17.865081414s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:54.843387144 +0000 UTC m=+163.026279445" watchObservedRunningTime="2025-12-03 09:14:54.865081414 +0000 UTC m=+163.047973715" Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.871679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.872207 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.372190341 +0000 UTC m=+163.555082642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.888974 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n2k6t" podStartSLOduration=138.8889476 podStartE2EDuration="2m18.8889476s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:54.867882048 +0000 UTC m=+163.050774349" watchObservedRunningTime="2025-12-03 09:14:54.8889476 +0000 UTC m=+163.071839891" Dec 03 09:14:54 crc kubenswrapper[4856]: I1203 09:14:54.976455 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:54 crc kubenswrapper[4856]: E1203 09:14:54.976725 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.476708584 +0000 UTC m=+163.659600885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.085253 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.085957 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.585925751 +0000 UTC m=+163.768818052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.186832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.187041 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.686997905 +0000 UTC m=+163.869890206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.187630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.188192 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.688180166 +0000 UTC m=+163.871072467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.291639 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.291909 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.791872908 +0000 UTC m=+163.974765209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.292455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.292988 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.792964787 +0000 UTC m=+163.975857288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.392882 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" event={"ID":"1d476d62-0430-48f7-85eb-711083570c5e","Type":"ContainerStarted","Data":"29fe0006d1aff64e8b61b3990b2a3ff5bbcdee6e5de7d5a0f6681efa4671eb6d"} Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.394159 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dgg9g" event={"ID":"a7d7cbff-7ff4-4512-b946-61cc310f6959","Type":"ContainerStarted","Data":"8e31186d7aae8aec9f37a9a35403fe7d7d193d97204d9a2c17cf6f72be20ac98"} Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.395511 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" event={"ID":"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93","Type":"ContainerStarted","Data":"625ec1b414202634ca25ef208caa8c2f449d9e2cee1d61a0e04d74170df0f1fc"} Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.396661 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" event={"ID":"61769af3-d6a3-42b8-916e-bd4f05ae6b55","Type":"ContainerStarted","Data":"aa5d9e803f195d0f8f75d3d3c0049d7ec8e3d61f8514a1436fc51e594d179231"} Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.400779 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.401228 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:55.901203768 +0000 UTC m=+164.084096069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.412903 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" event={"ID":"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d","Type":"ContainerStarted","Data":"11974535da8f43053ab60eb496a90a26e1bf6f15478d1bad8c1bb67b55dad8ab"} Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.503507 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.508088 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.008066103 +0000 UTC m=+164.190958404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.685043 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.685708 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.185675146 +0000 UTC m=+164.368567447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.787598 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.788889 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.288855865 +0000 UTC m=+164.471748166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.888910 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.889288 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.389264421 +0000 UTC m=+164.572156722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.889337 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.890017 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.3900046 +0000 UTC m=+164.572896901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:55 crc kubenswrapper[4856]: I1203 09:14:55.990185 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:55 crc kubenswrapper[4856]: E1203 09:14:55.990688 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.490666393 +0000 UTC m=+164.673558694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.092222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.092872 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.592840525 +0000 UTC m=+164.775732916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.194591 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.194988 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.694967266 +0000 UTC m=+164.877859567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.298155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.298550 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.798537925 +0000 UTC m=+164.981430226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.359870 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:14:56 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:14:56 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:14:56 crc kubenswrapper[4856]: healthz check failed Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.359938 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.398948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.399384 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.899332661 +0000 UTC m=+165.082225092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.399548 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.399932 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:56.899904936 +0000 UTC m=+165.082797257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.473103 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:14:56 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:14:56 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:14:56 crc kubenswrapper[4856]: healthz check failed Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.473741 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.484870 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" event={"ID":"98c24750-bafb-4669-94f3-a82c227cfdcc","Type":"ContainerStarted","Data":"f93cb2c67c6e7834a66d325621ef896c0cc0d3fc2f284ca773b98045530a6fab"} Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.509169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.510058 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.010025347 +0000 UTC m=+165.192917648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.643319 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.643637 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.143624914 +0000 UTC m=+165.326517215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.644970 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" event={"ID":"61769af3-d6a3-42b8-916e-bd4f05ae6b55","Type":"ContainerStarted","Data":"189e8ea22f4eca31a3dac4d5d205476c446f67e456523aaa16b43e8bbaaf0564"} Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.645286 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.646358 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" event={"ID":"ab4d4642-8563-4c33-9f0e-de1826944590","Type":"ContainerStarted","Data":"6eedb34ea96a8a20e158f27cdc2e10e770e4533002c69fb8a01e457f38080132"} Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.660301 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2p8h7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.660394 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" podUID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Dec 03 09:14:56 crc kubenswrapper[4856]: 
I1203 09:14:56.702838 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" podStartSLOduration=140.702796178 podStartE2EDuration="2m20.702796178s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:56.692515788 +0000 UTC m=+164.875408089" watchObservedRunningTime="2025-12-03 09:14:56.702796178 +0000 UTC m=+164.885688479" Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.703689 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9hffl" podStartSLOduration=139.703681441 podStartE2EDuration="2m19.703681441s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:54.894499626 +0000 UTC m=+163.077391947" watchObservedRunningTime="2025-12-03 09:14:56.703681441 +0000 UTC m=+164.886573742" Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.745254 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.745520 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.245485298 +0000 UTC m=+165.428377599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.745879 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.747687 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.247675496 +0000 UTC m=+165.430567997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.791241 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d"] Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.847421 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.848150 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.348102152 +0000 UTC m=+165.530994453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:56 crc kubenswrapper[4856]: I1203 09:14:56.955186 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:56 crc kubenswrapper[4856]: E1203 09:14:56.955699 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.455677996 +0000 UTC m=+165.638570297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.025847 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.029531 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.056184 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.056537 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.556520012 +0000 UTC m=+165.739412313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.073558 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-stb5r"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.073622 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pksqg"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.093992 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.103912 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.107763 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-55bjb"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.119079 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wg2wf"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.147013 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8jm29"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.147098 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-rx8vm"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.147180 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.159677 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.160262 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.660242105 +0000 UTC m=+165.843134406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.175569 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.179840 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.215345 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5fgm9"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.215901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.260468 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.261630 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.263146 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.763109956 +0000 UTC m=+165.946002267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.277403 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.310522 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.310587 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.317148 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.321010 4856 patch_prober.go:28] interesting pod/console-f9d7485db-n2k6t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.321239 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2k6t" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.326543 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.347927 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.354735 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.357713 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.359466 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.362038 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rmzl5"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.363946 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-t7bzl"] Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.364714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.367471 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.867449445 +0000 UTC m=+166.050341746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: W1203 09:14:57.417008 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod433e476c_4ead_4091_a686_2f6059b36947.slice/crio-e545952a1b0f6a9d0c9846b578b0c6fb022f428f16ae9560473750612850f83b WatchSource:0}: Error finding container e545952a1b0f6a9d0c9846b578b0c6fb022f428f16ae9560473750612850f83b: Status 404 returned error can't find the container with id e545952a1b0f6a9d0c9846b578b0c6fb022f428f16ae9560473750612850f83b Dec 03 09:14:57 crc kubenswrapper[4856]: W1203 09:14:57.459389 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2dac10_bf1a_4906_b51a_efe700b59b90.slice/crio-935fd889d7f11220963cce93e0676ff5ec75edefeae64e1fd6cee2f8f51440f0 WatchSource:0}: Error finding container 935fd889d7f11220963cce93e0676ff5ec75edefeae64e1fd6cee2f8f51440f0: Status 404 returned error can't find the container with id 935fd889d7f11220963cce93e0676ff5ec75edefeae64e1fd6cee2f8f51440f0 Dec 03 09:14:57 crc kubenswrapper[4856]: W1203 09:14:57.460967 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2a9769c_77a1_42ca_bd81_446d2ebcd4c6.slice/crio-b0fae6a68719d63ce22ab613e3096c73ee761544297f3b376c8f79cf1984a5ce WatchSource:0}: Error finding container b0fae6a68719d63ce22ab613e3096c73ee761544297f3b376c8f79cf1984a5ce: Status 404 returned error can't find the container with id b0fae6a68719d63ce22ab613e3096c73ee761544297f3b376c8f79cf1984a5ce Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.466559 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.467732 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:57.967692667 +0000 UTC m=+166.150584958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.482871 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:14:57 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:14:57 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:14:57 crc kubenswrapper[4856]: healthz check failed Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.483339 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.568828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.569202 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.069186821 +0000 UTC m=+166.252079122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: W1203 09:14:57.589596 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93c4de5_9d7f_404b_b218_57871d7a7dc1.slice/crio-28ed4fe73b102a5fc6be10eab13e2ea56d9be35cee432b6aa6895e15a5f7f8f8 WatchSource:0}: Error finding container 28ed4fe73b102a5fc6be10eab13e2ea56d9be35cee432b6aa6895e15a5f7f8f8: Status 404 returned error can't find the container with id 28ed4fe73b102a5fc6be10eab13e2ea56d9be35cee432b6aa6895e15a5f7f8f8 Dec 03 09:14:57 crc kubenswrapper[4856]: W1203 09:14:57.613047 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6fda50a_7d59_4053_bd6b_0863c51de8e1.slice/crio-e83f0964a96e2aa4a16ebd894de2aeb09a6d32073055d994ffb8d880deab5696 WatchSource:0}: Error finding container e83f0964a96e2aa4a16ebd894de2aeb09a6d32073055d994ffb8d880deab5696: Status 404 returned error can't find the container with id e83f0964a96e2aa4a16ebd894de2aeb09a6d32073055d994ffb8d880deab5696 Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.658933 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" event={"ID":"412eb7b9-351d-44b2-a427-6c26da5d1e39","Type":"ContainerStarted","Data":"59e2109db24d33a808916f9c3f44a9c94518d8801716d58a0365bb937588a554"} Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.663729 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" event={"ID":"54816846-8b12-4987-9e20-54b377252468","Type":"ContainerStarted","Data":"43b6c7c8fb804be72465ac7b0422b7b7c1bf7f0125d7d3ffdded5f2422d5c6bd"} Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.670001 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.671224 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.171196049 +0000 UTC m=+166.354088360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.691387 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" event={"ID":"d6fda50a-7d59-4053-bd6b-0863c51de8e1","Type":"ContainerStarted","Data":"e83f0964a96e2aa4a16ebd894de2aeb09a6d32073055d994ffb8d880deab5696"} Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.743698 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" event={"ID":"639f9569-7171-4a05-b3d0-74d75d49cc20","Type":"ContainerStarted","Data":"0f639a344f18f11064ccf324b9392d56961d43e684b03035c805c0b49deb2216"} Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.848628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:57 crc kubenswrapper[4856]: E1203 09:14:57.850504 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.350482516 +0000 UTC m=+166.533374817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.851027 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-55bjb" event={"ID":"dbc404e1-6730-4d2a-b57d-1a2af56390e5","Type":"ContainerStarted","Data":"bc71c8b330cbd7da4371567a0d7666a0e6e51702084a3c7f9df896bd13b83c02"} Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.882584 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" event={"ID":"ab4d4642-8563-4c33-9f0e-de1826944590","Type":"ContainerStarted","Data":"6e363c4a3cfd0086b65710cfb47c599fedb54ce6545ca33db833501b9531699e"} Dec 03 09:14:57 crc kubenswrapper[4856]: I1203 09:14:57.925529 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tq5gl" podStartSLOduration=141.925499965 podStartE2EDuration="2m21.925499965s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:57.924265863 +0000 UTC m=+166.107158174" watchObservedRunningTime="2025-12-03 09:14:57.925499965 +0000 UTC m=+166.108392266" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.002960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" event={"ID":"0b2fbf18-1b72-4721-9182-3bc74aeed3c6","Type":"ContainerStarted","Data":"fa9b25c37e96b05e533931079d0bbfa892704a86d849f9b2aa3242a799413682"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.006321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.006607 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" event={"ID":"3c2dac10-bf1a-4906-b51a-efe700b59b90","Type":"ContainerStarted","Data":"935fd889d7f11220963cce93e0676ff5ec75edefeae64e1fd6cee2f8f51440f0"} Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.006778 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.506757528 +0000 UTC m=+166.689649829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.021181 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" event={"ID":"33d2e5b3-0b33-4671-a4d6-cb0c463a6d93","Type":"ContainerStarted","Data":"50c86f70eacb9cb623affc9c67deb9b227780dde1d81452b58154f920eb42e99"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.028129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" event={"ID":"9ae7d610-3fd9-4574-aff6-5c36d33abac2","Type":"ContainerStarted","Data":"8de8f2616304a6a27399de760904250ffb26657d756212e0b2a04c97dbece6e2"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.040615 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" event={"ID":"6d04ab91-7b77-4086-87cc-f02f3e57c9c6","Type":"ContainerStarted","Data":"b7f738eadfd83602459a8fafafb6bec414513456e179f2a19cd64444cd3f224f"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.044263 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" event={"ID":"1d476d62-0430-48f7-85eb-711083570c5e","Type":"ContainerStarted","Data":"3fbc342cd3457775f3e9934cbcf70926de5052f365fa77ed6ef6d9198c3ba73c"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.044297 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" event={"ID":"1d476d62-0430-48f7-85eb-711083570c5e","Type":"ContainerStarted","Data":"32f27f1a8ab881e8ea041076e72b80f9020ba2cc6f21b8453eb2345f2eb288ee"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.069855 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" event={"ID":"98c24750-bafb-4669-94f3-a82c227cfdcc","Type":"ContainerStarted","Data":"1ee5ff537437977b2d4cb5b5bf13f1faf6da3eb50482a5aaccf307fad37f2066"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.072996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" event={"ID":"2da94f26-0a24-411f-9e4b-a5f7ac82c15f","Type":"ContainerStarted","Data":"d8c0a0803ebac3cb499cf13f64500edf33148345e0294aacd1ab9ed212c4f037"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.074718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" event={"ID":"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8","Type":"ContainerStarted","Data":"820ac92354fb00b15c59f9f20af3240dd702c04d036ebb91aee483410206ab1e"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.074935 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" event={"ID":"08dc03ba-8b0a-4b5c-8b32-a874fdb195a8","Type":"ContainerStarted","Data":"f87250a78e1757fdac249a5c99b9048d453f80b93225caa0d60513b2015111fa"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 
09:14:58.076533 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.078478 4856 generic.go:334] "Generic (PLEG): container finished" podID="56d86bd1-ade4-4858-ba65-8dd0edd7cf3d" containerID="a73e3c88618b41b4f77e0140f64959dcdd12f496e5117c4da3b22973a14a936f" exitCode=0 Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.078565 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" event={"ID":"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d","Type":"ContainerDied","Data":"a73e3c88618b41b4f77e0140f64959dcdd12f496e5117c4da3b22973a14a936f"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.080683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rmzl5" event={"ID":"a93c4de5-9d7f-404b-b218-57871d7a7dc1","Type":"ContainerStarted","Data":"28ed4fe73b102a5fc6be10eab13e2ea56d9be35cee432b6aa6895e15a5f7f8f8"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.082854 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" event={"ID":"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd","Type":"ContainerStarted","Data":"de74d0350a83f5a61111c15dbc9a5cb1877325f19642a31fab74ec65f38d54a7"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.093639 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-gmm5d container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.093756 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" podUID="08dc03ba-8b0a-4b5c-8b32-a874fdb195a8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.104666 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nrgr6" podStartSLOduration=142.104628387 podStartE2EDuration="2m22.104628387s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:58.100855818 +0000 UTC m=+166.283748129" watchObservedRunningTime="2025-12-03 09:14:58.104628387 +0000 UTC m=+166.287520698" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.114323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.120373 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.62035586 +0000 UTC m=+166.803248161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.146737 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tdppw" podStartSLOduration=141.146691392 podStartE2EDuration="2m21.146691392s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:58.139365849 +0000 UTC m=+166.322258150" watchObservedRunningTime="2025-12-03 09:14:58.146691392 +0000 UTC m=+166.329583693" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.166755 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dgg9g" event={"ID":"a7d7cbff-7ff4-4512-b946-61cc310f6959","Type":"ContainerStarted","Data":"6efdd9261dbede7bb8e7ed28aae96bcc023f3b1c3932ff323aa2e962db910b6c"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.167902 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.168818 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.168870 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.178361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" event={"ID":"43e4514a-c7b3-4960-a283-510cbdff66e0","Type":"ContainerStarted","Data":"486bebb28fc25ad47231cb0eceb6716172d9b301328748715ea14b88311a3bc6"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.216245 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.216493 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.716428622 +0000 UTC m=+166.899320923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.216614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.217316 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:58.717307975 +0000 UTC m=+166.900200276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.234536 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" podStartSLOduration=141.234506697 podStartE2EDuration="2m21.234506697s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:58.231929399 +0000 UTC m=+166.414821700" watchObservedRunningTime="2025-12-03 09:14:58.234506697 +0000 UTC m=+166.417398998" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.244571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" event={"ID":"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6","Type":"ContainerStarted","Data":"b0fae6a68719d63ce22ab613e3096c73ee761544297f3b376c8f79cf1984a5ce"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.253025 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" event={"ID":"11cb3301-096a-4379-aa98-92390b343969","Type":"ContainerStarted","Data":"10e128605095ef7d46d9a767e049b3dc62f249ef2639d2cd8944d7f7370de151"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.270349 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-99xt8" podStartSLOduration=141.270330657 podStartE2EDuration="2m21.270330657s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:58.266582419 +0000 UTC m=+166.449474720" 
watchObservedRunningTime="2025-12-03 09:14:58.270330657 +0000 UTC m=+166.453222958" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.287802 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" event={"ID":"7b9e41f7-3b5b-461d-b0d9-a28daec02d37","Type":"ContainerStarted","Data":"ac208e58f12dd881b9def63691f195a442a216ecd8e70597ddfd36663ea8ad9f"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.322407 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.341206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" event={"ID":"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8","Type":"ContainerStarted","Data":"f5da3dee96e0923e90f8b73b38ddb7936ab73460e4ad118995752765989ffff2"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.342413 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" event={"ID":"19d74018-0d57-4f93-a298-64b08e3df414","Type":"ContainerStarted","Data":"04a98d7d5ee7e83f280e7eda7611da32680e7ec6218e6d1962bfbf975eb055fa"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.343351 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" event={"ID":"6edb5f9d-0476-4676-b340-9d2eceaa42e3","Type":"ContainerStarted","Data":"8f84f273e18ba4fcead7b5d5326d70361bb376a93656bcec4e1e160b18cc65f7"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.357723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" event={"ID":"41569dbf-e024-4bb2-84b8-e56f8d00e389","Type":"ContainerStarted","Data":"08a58103f4fa5d3acbed220f949097f5e3709f501d126eca2a5e301f22d21c19"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.359741 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" event={"ID":"0aa057f1-9f16-42e5-be6c-f7712d1e5938","Type":"ContainerStarted","Data":"be6635dcff896dcdcb669e4064f565eb0c24248079c99d7b71b2758d48faa121"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.360900 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" event={"ID":"8967b2f3-208f-425a-9180-1143ec912230","Type":"ContainerStarted","Data":"9259c6da634d98917934878b6243652f041c8f2fde2b224cc2a6b929da9c9dfb"} Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.362042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wg2wf" event={"ID":"433e476c-4ead-4091-a686-2f6059b36947","Type":"ContainerStarted","Data":"e545952a1b0f6a9d0c9846b578b0c6fb022f428f16ae9560473750612850f83b"} Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.402714 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 09:14:58.902663451 +0000 UTC m=+167.085555752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.437719 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.517171 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.519746 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.019723014 +0000 UTC m=+167.202615505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.522144 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:14:58 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:14:58 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:14:58 crc kubenswrapper[4856]: healthz check failed Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.522247 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.560591 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dgg9g" podStartSLOduration=142.560558936 podStartE2EDuration="2m22.560558936s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:58.433547462 +0000 UTC m=+166.616439773" watchObservedRunningTime="2025-12-03 09:14:58.560558936 +0000 UTC m=+166.743451237" Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.622635 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.623063 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.123032156 +0000 UTC m=+167.305924457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.623203 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.623699 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.123686494 +0000 UTC m=+167.306578795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.725512 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.726516 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.226484692 +0000 UTC m=+167.409376993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.858703 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.859685 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.359664798 +0000 UTC m=+167.542557099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:58 crc kubenswrapper[4856]: I1203 09:14:58.961155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:58 crc kubenswrapper[4856]: E1203 09:14:58.961741 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.461708527 +0000 UTC m=+167.644600828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.063566 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.063986 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.563970062 +0000 UTC m=+167.746862363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.164127 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.164434 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.664407708 +0000 UTC m=+167.847300009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.285273 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.286534 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.786517324 +0000 UTC m=+167.969409625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.388544 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.389089 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.889055806 +0000 UTC m=+168.071948107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.470305 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.486710 4856 generic.go:334] "Generic (PLEG): container finished" podID="11cb3301-096a-4379-aa98-92390b343969" containerID="0fdb9e8150144cf64edec58e134025da5bf8744ab979103a3fbb93043da2c925" exitCode=0 Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.487495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" event={"ID":"11cb3301-096a-4379-aa98-92390b343969","Type":"ContainerDied","Data":"0fdb9e8150144cf64edec58e134025da5bf8744ab979103a3fbb93043da2c925"} Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.489392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.492139 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:14:59 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:14:59 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:14:59 crc kubenswrapper[4856]: healthz check failed Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.492216 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.495250 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:14:59.995222693 +0000 UTC m=+168.178114994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.519717 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" event={"ID":"41569dbf-e024-4bb2-84b8-e56f8d00e389","Type":"ContainerStarted","Data":"0e5e62dae50c9d6895b6c0dcd9aad82eee377787abad59a6b45cb8000a46d3a3"} Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.547791 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" event={"ID":"0a826ca5-e1ed-4746-9bfc-d52ecc7eafd8","Type":"ContainerStarted","Data":"183386c6615ef017b281cce2c84a47d8ee242d81235f1eb0079a40d2ccf8f273"} Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.548101 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gmm5d" Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.592052 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.592263 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.09223664 +0000 UTC m=+168.275128941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.593126 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.594893 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ndjc2" podStartSLOduration=142.594874929 podStartE2EDuration="2m22.594874929s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:59.593126503 +0000 UTC m=+167.776018804" watchObservedRunningTime="2025-12-03 09:14:59.594874929 +0000 UTC m=+167.777767220" Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.596047 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.096030539 +0000 UTC m=+168.278922840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.613060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" event={"ID":"8967b2f3-208f-425a-9180-1143ec912230","Type":"ContainerStarted","Data":"3101339bb6a26c11c3a626d66138729b638397c6e1c9aa899ce11aad760c74b1"} Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.615394 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.615471 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.686967 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dw7cr" podStartSLOduration=142.6733722 podStartE2EDuration="2m22.6733722s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:14:59.669280352 +0000 UTC m=+167.852172653" watchObservedRunningTime="2025-12-03 09:14:59.6733722 +0000 UTC m=+167.856264501" Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.698795 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.698944 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.19891529 +0000 UTC m=+168.381807591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.699138 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.702584 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.202555976 +0000 UTC m=+168.385448547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.803762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.805795 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.305746765 +0000 UTC m=+168.488639246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:14:59 crc kubenswrapper[4856]: I1203 09:14:59.906960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:14:59 crc kubenswrapper[4856]: E1203 09:14:59.907563 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.407545687 +0000 UTC m=+168.590437988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.008022 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.008935 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.508906298 +0000 UTC m=+168.691798609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.009259 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.009729 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.509719919 +0000 UTC m=+168.692612220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.125418 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.125784 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.625770126 +0000 UTC m=+168.808662427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.230539 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.231033 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.731012649 +0000 UTC m=+168.913904950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.335671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.336405 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.836379825 +0000 UTC m=+169.019272126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.438410 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.440094 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:00.940070767 +0000 UTC m=+169.122963068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.540837 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.541090 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.041048628 +0000 UTC m=+169.223940929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.541440 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.542017 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.042007063 +0000 UTC m=+169.224899364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.556658 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:00 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:00 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:00 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.556825 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.673355 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.674022 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.173994157 +0000 UTC m=+169.356886458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.789073 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.789525 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.289506169 +0000 UTC m=+169.472398480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.799702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wg2wf" event={"ID":"433e476c-4ead-4091-a686-2f6059b36947","Type":"ContainerStarted","Data":"d0df1ea340c90e478c6e7ea6c99c2ec50d23a9c9ebe8e06799932f60991dc3da"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.809223 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" event={"ID":"19d74018-0d57-4f93-a298-64b08e3df414","Type":"ContainerStarted","Data":"46866ffa3c281b813bdf5adb21e5b265ac2147d2d8671e71126e122081fa86bf"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.812415 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.812589 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pksqg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.812658 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.846616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" 
event={"ID":"0aa057f1-9f16-42e5-be6c-f7712d1e5938","Type":"ContainerStarted","Data":"2bba0c8fb02927283660ce337e9fdd025701b52716d9538238123ce5b4d822b0"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.866851 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" event={"ID":"7b9e41f7-3b5b-461d-b0d9-a28daec02d37","Type":"ContainerStarted","Data":"6378af8024a2893a60881d0f6b0150b62a564d84f6c8221f322cce88072e3e01"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.891963 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:00 crc kubenswrapper[4856]: E1203 09:15:00.894529 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.394444424 +0000 UTC m=+169.577336725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.936575 4856 generic.go:334] "Generic (PLEG): container finished" podID="6d04ab91-7b77-4086-87cc-f02f3e57c9c6" containerID="26082691083f85dc52a8a604b3e2a8e16440e3c224bc2a75eb422db06a6d7e7b" exitCode=0 Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.937391 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" event={"ID":"6d04ab91-7b77-4086-87cc-f02f3e57c9c6","Type":"ContainerDied","Data":"26082691083f85dc52a8a604b3e2a8e16440e3c224bc2a75eb422db06a6d7e7b"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.977195 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rmzl5" event={"ID":"a93c4de5-9d7f-404b-b218-57871d7a7dc1","Type":"ContainerStarted","Data":"be63ac29eb66d0505984cdeb9dfa3bdd93de43b0e2a27f45e4e26547e2341375"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.983425 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" event={"ID":"2da94f26-0a24-411f-9e4b-a5f7ac82c15f","Type":"ContainerStarted","Data":"78fc5ff7c013269832ef37b78f6af7b3a95cce41404ddeaaf67a271bea9f9887"} Dec 03 09:15:00 crc kubenswrapper[4856]: I1203 09:15:00.995024 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:00.999978 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.499946164 +0000 UTC m=+169.682838645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.046908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" event={"ID":"54816846-8b12-4987-9e20-54b377252468","Type":"ContainerStarted","Data":"a01ce6d1be24f9867074779368c4aafb3f37e4d00112f3b61c295880a90dde9f"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.096479 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" event={"ID":"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6","Type":"ContainerStarted","Data":"3e86970472490ff357cfb82c6078fbda535dd791c7afc30b6257159f4959e448"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.096741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.098616 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.598586033 +0000 UTC m=+169.781478334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.102572 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" event={"ID":"9ae7d610-3fd9-4574-aff6-5c36d33abac2","Type":"ContainerStarted","Data":"d2dc87ba24dad8ffaaf6308f78031361fc9aecd2de85212924ba1f84b75df036"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.103745 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.104914 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" event={"ID":"6edb5f9d-0476-4676-b340-9d2eceaa42e3","Type":"ContainerStarted","Data":"7c6d662ed5e7df6284714f12cccab28e61e4673a0c215ff2d8f0c4659a29b6b3"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.105996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" event={"ID":"639f9569-7171-4a05-b3d0-74d75d49cc20","Type":"ContainerStarted","Data":"94bf2a540cf1e1556fa88d466d2048ff5e9acd0b465c0c64badd0a516e7fe250"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.107535 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" event={"ID":"412eb7b9-351d-44b2-a427-6c26da5d1e39","Type":"ContainerStarted","Data":"04058e59502aa9032d2f122c8ec5cfeccdae1290c6ce7451de6f44da5bd55aa1"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.109593 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" event={"ID":"0b2fbf18-1b72-4721-9182-3bc74aeed3c6","Type":"ContainerStarted","Data":"d4c71805a55d83f452a9e0db30ccc39b75f62519e22344bbe4313756044b2232"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.110712 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.120112 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-29h7h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.120163 4856 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lqvvn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.120202 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" 
podUID="0b2fbf18-1b72-4721-9182-3bc74aeed3c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.120247 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" podUID="9ae7d610-3fd9-4574-aff6-5c36d33abac2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.123086 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" event={"ID":"3c2dac10-bf1a-4906-b51a-efe700b59b90","Type":"ContainerStarted","Data":"1a3d7fa871d3f37df60ef6fadc792e1bf9467f429d18f3b2f82127a808d178bf"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.147840 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" event={"ID":"43e4514a-c7b3-4960-a283-510cbdff66e0","Type":"ContainerStarted","Data":"2bedd64184bfcd38b1d9c798fa055aeda824e85f0c223a6c7b0c940044e58869"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.148751 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wg2wf" podStartSLOduration=16.148715009 podStartE2EDuration="16.148715009s" podCreationTimestamp="2025-12-03 09:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:01.145982788 +0000 UTC m=+169.328875109" watchObservedRunningTime="2025-12-03 09:15:01.148715009 +0000 UTC m=+169.331607310" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.164330 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-55bjb" event={"ID":"dbc404e1-6730-4d2a-b57d-1a2af56390e5","Type":"ContainerStarted","Data":"5654e48ec08c0d0a7943f12627faf0f1639bceba32ce6e21aff5f76a3498f08d"} Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.166964 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.167112 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.203773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.206157 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.706137477 +0000 UTC m=+169.889029778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.229848 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"] Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.309496 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.311260 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.811212556 +0000 UTC m=+169.994104867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.331612 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b2qz7" podStartSLOduration=144.33157238 podStartE2EDuration="2m24.33157238s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:01.327973085 +0000 UTC m=+169.510865396" watchObservedRunningTime="2025-12-03 09:15:01.33157238 +0000 UTC m=+169.514464691" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.363908 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj"] Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.367560 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.416918 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.418431 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:01.918406299 +0000 UTC m=+170.101298600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.433005 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj"] Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.518417 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:01 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:01 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:01 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.518512 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.531108 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.531643 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6rhx\" (UniqueName: \"kubernetes.io/projected/a13cce6b-a09e-4736-88b2-8212ae48ee93-kube-api-access-p6rhx\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.531723 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a13cce6b-a09e-4736-88b2-8212ae48ee93-config-volume\") pod 
\"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.531792 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a13cce6b-a09e-4736-88b2-8212ae48ee93-secret-volume\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.531990 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.03196434 +0000 UTC m=+170.214856641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.583627 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8jm29" podStartSLOduration=144.583600186 podStartE2EDuration="2m24.583600186s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:01.453201563 +0000 UTC m=+169.636093864" watchObservedRunningTime="2025-12-03 09:15:01.583600186 +0000 UTC m=+169.766492487" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.633351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a13cce6b-a09e-4736-88b2-8212ae48ee93-config-volume\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.633415 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.633446 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a13cce6b-a09e-4736-88b2-8212ae48ee93-secret-volume\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.633503 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6rhx\" (UniqueName: 
\"kubernetes.io/projected/a13cce6b-a09e-4736-88b2-8212ae48ee93-kube-api-access-p6rhx\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.635989 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a13cce6b-a09e-4736-88b2-8212ae48ee93-config-volume\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.636779 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.136746831 +0000 UTC m=+170.319639142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.672285 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" podStartSLOduration=144.672248913 podStartE2EDuration="2m24.672248913s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:01.670329673 +0000 UTC m=+169.853221974" watchObservedRunningTime="2025-12-03 09:15:01.672248913 +0000 UTC m=+169.855141214" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.686041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a13cce6b-a09e-4736-88b2-8212ae48ee93-secret-volume\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.735700 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.736312 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.737161 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.237131987 +0000 UTC m=+170.420024288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.754776 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" podStartSLOduration=144.754743699 podStartE2EDuration="2m24.754743699s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:01.753294281 +0000 UTC m=+169.936186592" watchObservedRunningTime="2025-12-03 09:15:01.754743699 +0000 UTC m=+169.937635990" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.842678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.843198 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.343181621 +0000 UTC m=+170.526073922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.843885 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9dd0ced-1cd2-4711-b54f-bdce45437d2c-metrics-certs\") pod \"network-metrics-daemon-cqjvn\" (UID: \"f9dd0ced-1cd2-4711-b54f-bdce45437d2c\") " pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.854372 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cqjvn" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.855322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6rhx\" (UniqueName: \"kubernetes.io/projected/a13cce6b-a09e-4736-88b2-8212ae48ee93-kube-api-access-p6rhx\") pod \"collect-profiles-29412555-f5qgj\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.935568 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:01 crc kubenswrapper[4856]: I1203 09:15:01.945100 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:01 crc kubenswrapper[4856]: E1203 09:15:01.945872 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.445826685 +0000 UTC m=+170.628718986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.097750 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.098214 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.598198305 +0000 UTC m=+170.781090606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.195750 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" event={"ID":"2ac1c7a6-9c52-4ade-ae8c-ca01cba906bd","Type":"ContainerStarted","Data":"efe09c5b91bd286ef0bf3814e5e5e6124dda9ef01bfdfdda0b53bd96ffd565dd"} Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.199084 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.200188 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.700128301 +0000 UTC m=+170.883020662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.305984 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.306889 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.806870913 +0000 UTC m=+170.989763214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.316343 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9wdgf" podStartSLOduration=145.316313681 podStartE2EDuration="2m25.316313681s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:02.104293065 +0000 UTC m=+170.287185366" watchObservedRunningTime="2025-12-03 09:15:02.316313681 +0000 UTC m=+170.499205982" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.330993 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" event={"ID":"56d86bd1-ade4-4858-ba65-8dd0edd7cf3d","Type":"ContainerStarted","Data":"e26d1300132a7fe18139988ffa56d00b4d4f7bd4ce73c8caaf9c22cee6e777a6"} Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.410493 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.412475 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:02.912442745 +0000 UTC m=+171.095335046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.470472 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-85pdd" podStartSLOduration=145.470444557 podStartE2EDuration="2m25.470444557s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:02.318285203 +0000 UTC m=+170.501177514" watchObservedRunningTime="2025-12-03 09:15:02.470444557 +0000 UTC m=+170.653336858" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.500307 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:02 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:02 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:02 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.500395 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.522000 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.522528 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.022505874 +0000 UTC m=+171.205398175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.534289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" event={"ID":"3c2dac10-bf1a-4906-b51a-efe700b59b90","Type":"ContainerStarted","Data":"c977854316052e0bf72980b0fbdf3cbb50e7b003dfb065e7f9fc23f16ea76f5f"} Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.548629 4856 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lqvvn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.548693 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" podUID="9ae7d610-3fd9-4574-aff6-5c36d33abac2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.548760 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pksqg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.548777 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.549132 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.549205 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-29h7h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.549232 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" podUID="0b2fbf18-1b72-4721-9182-3bc74aeed3c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.555671 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-55bjb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.555792 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-55bjb" podUID="dbc404e1-6730-4d2a-b57d-1a2af56390e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.601881 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" podStartSLOduration=145.601834427 podStartE2EDuration="2m25.601834427s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:02.475026248 +0000 UTC m=+170.657918549" watchObservedRunningTime="2025-12-03 09:15:02.601834427 +0000 UTC m=+170.784726728" Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.769142 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.771396 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.271355577 +0000 UTC m=+171.454248068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.887972 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:02 crc kubenswrapper[4856]: E1203 09:15:02.888987 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.388969565 +0000 UTC m=+171.571861866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:02 crc kubenswrapper[4856]: I1203 09:15:02.929749 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" podStartSLOduration=145.929716214 podStartE2EDuration="2m25.929716214s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:02.790382166 +0000 UTC m=+170.973274467" watchObservedRunningTime="2025-12-03 09:15:02.929716214 +0000 UTC m=+171.112608515" Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.057507 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.058788 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.558754432 +0000 UTC m=+171.741646873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.157430 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rx8vm" podStartSLOduration=146.157407592 podStartE2EDuration="2m26.157407592s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:03.157126664 +0000 UTC m=+171.340018985" watchObservedRunningTime="2025-12-03 09:15:03.157407592 +0000 UTC m=+171.340299893" Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.159206 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.159703 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.659686451 +0000 UTC m=+171.842578752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.277505 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.278195 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.778151502 +0000 UTC m=+171.961043803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.352752 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8jsxv" podStartSLOduration=146.352721619 podStartE2EDuration="2m26.352721619s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:03.278256104 +0000 UTC m=+171.461148405" watchObservedRunningTime="2025-12-03 09:15:03.352721619 +0000 UTC m=+171.535613920" Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.392226 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.392837 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:03.892797371 +0000 UTC m=+172.075689672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.522600 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" podStartSLOduration=146.522572588 podStartE2EDuration="2m26.522572588s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:03.521739046 +0000 UTC m=+171.704631347" watchObservedRunningTime="2025-12-03 09:15:03.522572588 +0000 UTC m=+171.705464889" Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.524112 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-55bjb" podStartSLOduration=147.524101068 podStartE2EDuration="2m27.524101068s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:03.362707071 +0000 UTC m=+171.545599382" watchObservedRunningTime="2025-12-03 09:15:03.524101068 +0000 UTC m=+171.706993369" Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.526813 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.527406 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:04.027390985 +0000 UTC m=+172.210283286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.575804 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:03 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:03 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:03 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.575954 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.629268 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.630016 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:04.129992308 +0000 UTC m=+172.312884609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.839580 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rmzl5" event={"ID":"a93c4de5-9d7f-404b-b218-57871d7a7dc1","Type":"ContainerStarted","Data":"6cda6c4443c9ac69294173579bb2a83926560e6f8da87e331b458ced63ab1baa"} Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.839709 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:03 crc kubenswrapper[4856]: E1203 09:15:03.840620 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 09:15:04.340584166 +0000 UTC m=+172.523476637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:03 crc kubenswrapper[4856]: I1203 09:15:03.840801 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rmzl5" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.033294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.057788 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:04.557751548 +0000 UTC m=+172.740643849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.109683 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.110315 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.184781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.185247 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:04.685225383 +0000 UTC m=+172.868117674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.249492 4856 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-bxcf2 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.17:8443/livez\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.249651 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" podUID="56d86bd1-ade4-4858-ba65-8dd0edd7cf3d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/livez\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.289666 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.290322 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:04.790304472 +0000 UTC m=+172.973196773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.318674 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" event={"ID":"11cb3301-096a-4379-aa98-92390b343969","Type":"ContainerStarted","Data":"1e703c1f3a9ec549555c34c826dfaebee52f05ee806947e633ca10da65a68e5a"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.320209 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.322482 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" event={"ID":"6edb5f9d-0476-4676-b340-9d2eceaa42e3","Type":"ContainerStarted","Data":"d8b96f7119973630e493ca45a3bba6a0daa5060ed407ce945cd99ebdcd34e054"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.323579 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.325116 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" event={"ID":"d6fda50a-7d59-4053-bd6b-0863c51de8e1","Type":"ContainerStarted","Data":"1ad8f095bfc2871961be47c9e471c0f38724cd584ca1f361213370660d241442"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.327518 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" event={"ID":"41569dbf-e024-4bb2-84b8-e56f8d00e389","Type":"ContainerStarted","Data":"fcffafde7f1ef64029a75284d744c4fa4765f5f03ca8ead3d6b8ef9908053acd"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.390746 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" event={"ID":"43e4514a-c7b3-4960-a283-510cbdff66e0","Type":"ContainerStarted","Data":"1c66b00028607d55a051aa4996f5410e6d08ed41d2ec6999ed8c07c1ae518b05"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.399304 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.400433 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:04.900399752 +0000 UTC m=+173.083292053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.400793 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" event={"ID":"a2a9769c-77a1-42ca-bd81-446d2ebcd4c6","Type":"ContainerStarted","Data":"98e665dfc259fcf7f328404bb306c062e301ac9aed164a212161f76bc933d1ed"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.418757 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" podUID="412eb7b9-351d-44b2-a427-6c26da5d1e39" containerName="collect-profiles" containerID="cri-o://04058e59502aa9032d2f122c8ec5cfeccdae1290c6ce7451de6f44da5bd55aa1" gracePeriod=30 Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.419015 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" event={"ID":"6d04ab91-7b77-4086-87cc-f02f3e57c9c6","Type":"ContainerStarted","Data":"10b499a5b4c190371945079426e0ace7b5535831d23056509f6e4246cdab1285"} Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.439276 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pksqg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.439357 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.514038 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.514572 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.014552129 +0000 UTC m=+173.197444430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.528768 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:04 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:04 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:04 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.528885 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.627255 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lqvvn" Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.637671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.638962 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.138932724 +0000 UTC m=+173.321825025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.754954 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.758146 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.258129693 +0000 UTC m=+173.441021994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:04 crc kubenswrapper[4856]: I1203 09:15:04.907438 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:04 crc kubenswrapper[4856]: E1203 09:15:04.909924 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.409899127 +0000 UTC m=+173.592791418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.163072 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:05 crc kubenswrapper[4856]: E1203 09:15:05.163494 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.663471084 +0000 UTC m=+173.846363405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.344126 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:05 crc kubenswrapper[4856]: E1203 09:15:05.344439 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.844411834 +0000 UTC m=+174.027304135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.423329 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-55bjb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.423937 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-55bjb" podUID="dbc404e1-6730-4d2a-b57d-1a2af56390e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.471301 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-29h7h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.471384 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" podUID="0b2fbf18-1b72-4721-9182-3bc74aeed3c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.471425 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:05 crc kubenswrapper[4856]: E1203 09:15:05.471962 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:05.971945932 +0000 UTC m=+174.154838223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.667122 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:05 crc kubenswrapper[4856]: E1203 09:15:05.667759 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:06.167727551 +0000 UTC m=+174.350619852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.796186 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:05 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:05 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:05 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.796256 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.797358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:05 crc kubenswrapper[4856]: E1203 09:15:05.797934 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:06.297917329 +0000 UTC m=+174.480809630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.798230 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29412540-mbjsr_412eb7b9-351d-44b2-a427-6c26da5d1e39/collect-profiles/0.log" Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.798290 4856 generic.go:334] "Generic (PLEG): container finished" podID="412eb7b9-351d-44b2-a427-6c26da5d1e39" containerID="04058e59502aa9032d2f122c8ec5cfeccdae1290c6ce7451de6f44da5bd55aa1" exitCode=2 Dec 03 09:15:05 crc kubenswrapper[4856]: I1203 09:15:05.798420 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" event={"ID":"412eb7b9-351d-44b2-a427-6c26da5d1e39","Type":"ContainerDied","Data":"04058e59502aa9032d2f122c8ec5cfeccdae1290c6ce7451de6f44da5bd55aa1"} Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:05.995166 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:06 crc kubenswrapper[4856]: E1203 09:15:05.995451 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:06.495429954 +0000 UTC m=+174.678322255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.003350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" event={"ID":"6d04ab91-7b77-4086-87cc-f02f3e57c9c6","Type":"ContainerStarted","Data":"f915f63b8412eb58aee118deea3b65bf54291854b03c5c2ea4b88d6b7ddd884a"} Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.148141 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:06 crc kubenswrapper[4856]: E1203 09:15:06.148586 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:06.648570865 +0000 UTC m=+174.831463166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.216342 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" event={"ID":"2da94f26-0a24-411f-9e4b-a5f7ac82c15f","Type":"ContainerStarted","Data":"7732861b6e25872750c59e8627190aa1b5344681788347c8ff75e9b651288e32"} Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.235371 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mcbr8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.235441 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podUID="11cb3301-096a-4379-aa98-92390b343969" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.289988 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 
09:15:06 crc kubenswrapper[4856]: E1203 09:15:06.291602 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:06.791576509 +0000 UTC m=+174.974468810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.444903 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:06 crc kubenswrapper[4856]: E1203 09:15:06.445456 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:06.945441058 +0000 UTC m=+175.128333359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.543676 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:06 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:06 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:06 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.543780 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.546626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:06 crc kubenswrapper[4856]: E1203 09:15:06.547092 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:07.047071056 +0000 UTC m=+175.229963357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.716427 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:06 crc kubenswrapper[4856]: E1203 09:15:06.716990 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:07.216969396 +0000 UTC m=+175.399861697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.790391 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj"] Dec 03 09:15:06 crc kubenswrapper[4856]: I1203 09:15:06.825269 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:06 crc kubenswrapper[4856]: E1203 09:15:06.826497 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:07.326458831 +0000 UTC m=+175.509351132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.140558 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.305255 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mcbr8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.305340 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podUID="11cb3301-096a-4379-aa98-92390b343969" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 03 09:15:07 crc kubenswrapper[4856]: E1203 09:15:07.856086 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:08.356053458 +0000 UTC m=+176.538945759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.859040 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mcbr8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.859129 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podUID="11cb3301-096a-4379-aa98-92390b343969" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.885330 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 09:15:07 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld
Dec 03 09:15:07 crc kubenswrapper[4856]: [+]process-running ok
Dec 03 09:15:07 crc kubenswrapper[4856]: healthz check failed
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.885401 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 09:15:07 crc kubenswrapper[4856]: I1203 09:15:07.963904 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:07 crc kubenswrapper[4856]: E1203 09:15:07.964473 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:08.464444404 +0000 UTC m=+176.647336715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.010390 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.010545 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.011226 4856 patch_prober.go:28] interesting pod/console-f9d7485db-n2k6t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.011275 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2k6t" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.017451 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.017547 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.023197 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" event={"ID":"a13cce6b-a09e-4736-88b2-8212ae48ee93","Type":"ContainerStarted","Data":"4787fe02ad2384836ef320cd98d5df65486b474cd1e19fd581b96bad992736d9"}
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.047128 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cqjvn"]
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.092544 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
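[Editor's note] The Liveness, Readiness, and Startup probe failures above are all either connection-refused dials or HTTP 500 responses against the containers' health endpoints while those containers are still coming up. A minimal sketch of the shape of that check, assuming a plain HTTP GET with a short timeout and treating anything outside 2xx/3xx as failure; the URL below is a hypothetical stand-in for the pod endpoints in the log:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe mirrors the shape of an HTTP health probe: one GET with a short
// timeout; a dial error or a status outside 2xx/3xx counts as a failure.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp ...: connect: connection refused" while the
		// container is still starting, as in the entries above.
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical endpoint standing in for the pod IPs in the log
	// (10.217.0.15:8443/healthz and friends).
	if err := probe("http://127.0.0.1:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```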
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.098597 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:08.598541054 +0000 UTC m=+176.781433355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.103798 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mcbr8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.104035 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podUID="11cb3301-096a-4379-aa98-92390b343969" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.118660 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rmzl5" podStartSLOduration=23.118635622 podStartE2EDuration="23.118635622s" podCreationTimestamp="2025-12-03 09:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.064264234 +0000 UTC m=+176.247156535" watchObservedRunningTime="2025-12-03 09:15:08.118635622 +0000 UTC m=+176.301527923"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.120436 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.123943 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.143774 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.148428 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.189478 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.192068 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-stb5r" podStartSLOduration=151.192031888 podStartE2EDuration="2m31.192031888s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.190767495 +0000 UTC m=+176.373659806" watchObservedRunningTime="2025-12-03 09:15:08.192031888 +0000 UTC m=+176.374924189"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.197335 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.199768 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:08.699743511 +0000 UTC m=+176.882635812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.246247 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ct5kt" podStartSLOduration=151.246211421 podStartE2EDuration="2m31.246211421s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.246136689 +0000 UTC m=+176.429029010" watchObservedRunningTime="2025-12-03 09:15:08.246211421 +0000 UTC m=+176.429103722"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.309713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f60eec4c-0511-4174-ab54-861bd8b1fc31-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.310291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f60eec4c-0511-4174-ab54-861bd8b1fc31-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.310336 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.310878 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:08.810856828 +0000 UTC m=+176.993749129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.418779 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.421466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f60eec4c-0511-4174-ab54-861bd8b1fc31-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.422892 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f60eec4c-0511-4174-ab54-861bd8b1fc31-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.423727 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:08.92370333 +0000 UTC m=+177.106595631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.425908 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f60eec4c-0511-4174-ab54-861bd8b1fc31-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.521031 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 09:15:08 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld
Dec 03 09:15:08 crc kubenswrapper[4856]: [+]process-running ok
Dec 03 09:15:08 crc kubenswrapper[4856]: healthz check failed
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.521148 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.543348 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.545130 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.045108877 +0000 UTC m=+177.228001178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.584360 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" podStartSLOduration=151.584316677 podStartE2EDuration="2m31.584316677s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.308973858 +0000 UTC m=+176.491866159" watchObservedRunningTime="2025-12-03 09:15:08.584316677 +0000 UTC m=+176.767208978"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.584882 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" podStartSLOduration=152.584875991 podStartE2EDuration="2m32.584875991s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.572992169 +0000 UTC m=+176.755884470" watchObservedRunningTime="2025-12-03 09:15:08.584875991 +0000 UTC m=+176.767768292"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.612182 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f60eec4c-0511-4174-ab54-861bd8b1fc31-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.645287 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.645789 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.14576681 +0000 UTC m=+177.328659111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.747299 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qgmxw" podStartSLOduration=151.747261764 podStartE2EDuration="2m31.747261764s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.744280416 +0000 UTC m=+176.927172717" watchObservedRunningTime="2025-12-03 09:15:08.747261764 +0000 UTC m=+176.930154065"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.752487 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.753075 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.253058176 +0000 UTC m=+177.435950477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.757618 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6k2kk"]
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.758851 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.771941 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
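[Editor's note] The MountVolume.MountDevice and UnmountVolume.TearDown failures for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 keep repeating because the CSI driver kubevirt.io.hostpath-provisioner has not yet registered with the kubelet, and each failed attempt re-arms a per-volume backoff ("No retries permitted until ...", durationBeforeRetry 500ms). A minimal sketch of that gating pattern, assuming a fixed 500ms delay as seen in these entries; the real logic lives in nestedpendingoperations.go and is more involved:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// backoffGate sketches the per-volume gating behind the repeated
// "No retries permitted until ..." entries: a failed operation arms a
// deadline for its key, and attempts before that deadline are rejected
// without running the operation at all.
type backoffGate struct {
	mu   sync.Mutex
	next map[string]time.Time
}

func (g *backoffGate) run(key string, delay time.Duration, op func() error) error {
	g.mu.Lock()
	if until, ok := g.next[key]; ok && time.Now().Before(until) {
		g.mu.Unlock()
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			until.Format(time.RFC3339Nano), delay)
	}
	g.mu.Unlock()

	err := op()
	if err != nil {
		g.mu.Lock()
		g.next[key] = time.Now().Add(delay) // arm the next-retry deadline
		g.mu.Unlock()
	}
	return err
}

func main() {
	gate := &backoffGate{next: make(map[string]time.Time)}
	// Stand-in for MountDevice while the CSI driver is unregistered.
	mountDevice := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	for i := 0; i < 4; i++ {
		err := gate.run("pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", 500*time.Millisecond, mountDevice)
		fmt.Printf("attempt %d: %v\n", i, err)
		time.Sleep(200 * time.Millisecond)
	}
}
```

Attempts that arrive before the deadline are rejected immediately, which is why the log shows a steady cadence of "No retries permitted until" errors rather than a tight retry loop.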
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.783209 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.813196 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podStartSLOduration=152.813167274 podStartE2EDuration="2m32.813167274s" podCreationTimestamp="2025-12-03 09:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.811265494 +0000 UTC m=+176.994157795" watchObservedRunningTime="2025-12-03 09:15:08.813167274 +0000 UTC m=+176.996059575"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.839293 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k2kk"]
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.863792 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.864133 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-catalog-content\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.864180 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hnm\" (UniqueName: \"kubernetes.io/projected/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-kube-api-access-42hnm\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.864253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-utilities\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.864419 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.364397979 +0000 UTC m=+177.547290280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.896700 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4ddj5" podStartSLOduration=151.896673286 podStartE2EDuration="2m31.896673286s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:08.867257674 +0000 UTC m=+177.050149975" watchObservedRunningTime="2025-12-03 09:15:08.896673286 +0000 UTC m=+177.079565587"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.950897 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wzfrz"]
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.952487 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.958664 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966695 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-catalog-content\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966746 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hnm\" (UniqueName: \"kubernetes.io/projected/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-kube-api-access-42hnm\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966825 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-catalog-content\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-utilities\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966936 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-utilities\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.966953 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5r2j\" (UniqueName: \"kubernetes.io/projected/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-kube-api-access-z5r2j\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:08 crc kubenswrapper[4856]: E1203 09:15:08.967641 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.467601538 +0000 UTC m=+177.650494039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.968617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-catalog-content\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.968884 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-utilities\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:08 crc kubenswrapper[4856]: I1203 09:15:08.979094 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzfrz"]
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.041200 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hnm\" (UniqueName: \"kubernetes.io/projected/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-kube-api-access-42hnm\") pod \"certified-operators-6k2kk\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.071684 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.072244 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-catalog-content\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.072294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-utilities\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.072340 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5r2j\" (UniqueName: \"kubernetes.io/projected/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-kube-api-access-z5r2j\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.073082 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.573050607 +0000 UTC m=+177.755942908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.073638 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-catalog-content\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.073928 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-utilities\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.100654 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29412540-mbjsr_412eb7b9-351d-44b2-a427-6c26da5d1e39/collect-profiles/0.log"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.100732 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.142936 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k2kk"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.152941 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ql25n"]
Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.153248 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="412eb7b9-351d-44b2-a427-6c26da5d1e39" containerName="collect-profiles"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.153267 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="412eb7b9-351d-44b2-a427-6c26da5d1e39" containerName="collect-profiles"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.153399 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="412eb7b9-351d-44b2-a427-6c26da5d1e39" containerName="collect-profiles"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.157174 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5r2j\" (UniqueName: \"kubernetes.io/projected/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-kube-api-access-z5r2j\") pod \"community-operators-wzfrz\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " pod="openshift-marketplace/community-operators-wzfrz"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.157316 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ql25n"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.177373 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.177397 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.177504 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl"
Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.177710 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.677694284 +0000 UTC m=+177.860586585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.180167 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ql25n"]
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.198056 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-t7bzl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.198120 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" podUID="6d04ab91-7b77-4086-87cc-f02f3e57c9c6" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.220694 4856 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.278749 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/412eb7b9-351d-44b2-a427-6c26da5d1e39-config-volume\") pod \"412eb7b9-351d-44b2-a427-6c26da5d1e39\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") "
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.278940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/412eb7b9-351d-44b2-a427-6c26da5d1e39-secret-volume\") pod \"412eb7b9-351d-44b2-a427-6c26da5d1e39\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") "
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.279171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4cwg\" (UniqueName: \"kubernetes.io/projected/412eb7b9-351d-44b2-a427-6c26da5d1e39-kube-api-access-n4cwg\") pod \"412eb7b9-351d-44b2-a427-6c26da5d1e39\" (UID: \"412eb7b9-351d-44b2-a427-6c26da5d1e39\") "
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.279408 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.279680 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-utilities\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n"
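[Editor's note] The plugin_watcher.go entry at 09:15:09.220694 above is the turning point: the hostpath provisioner's registration socket has appeared under /var/lib/kubelet/plugins_registry, so the driver is about to register and the failing mount retries for pvc-657094db-... can finally succeed. A sketch of how one could watch for that socket from outside the kubelet, assuming a simple polling loop (the kubelet itself reacts to filesystem events on this directory rather than polling):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// Polls the kubelet plugin-registration directory until a driver's
// registration socket appears. The directory path is the one from the
// log entry; the polling loop is an illustrative stand-in for the
// kubelet's own event-driven plugin watcher.
func main() {
	const dir = "/var/lib/kubelet/plugins_registry"
	for {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			return
		}
		for _, e := range entries {
			if strings.HasSuffix(e.Name(), "-reg.sock") {
				fmt.Println("registration socket present:", filepath.Join(dir, e.Name()))
				return
			}
		}
		time.Sleep(time.Second)
	}
}
```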
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fk2\" (UniqueName: \"kubernetes.io/projected/7ca6b2f8-d259-4c32-9582-d18786e0762a-kube-api-access-c2fk2\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.279844 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-catalog-content\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.280060 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.78003441 +0000 UTC m=+177.962926711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.301135 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/412eb7b9-351d-44b2-a427-6c26da5d1e39-config-volume" (OuterVolumeSpecName: "config-volume") pod "412eb7b9-351d-44b2-a427-6c26da5d1e39" (UID: "412eb7b9-351d-44b2-a427-6c26da5d1e39"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.301630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" event={"ID":"2da94f26-0a24-411f-9e4b-a5f7ac82c15f","Type":"ContainerStarted","Data":"fdab0bce600a88476df55e973bcedca21b0e8b043d348f3fb7ca6801e05a8725"} Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.302269 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-55bjb" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.335280 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" event={"ID":"f9dd0ced-1cd2-4711-b54f-bdce45437d2c","Type":"ContainerStarted","Data":"c4d0b36aad76fa6c66974ad6a55d2b1d9816542969b5adfb884eaea7818f2996"} Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.356925 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29412540-mbjsr_412eb7b9-351d-44b2-a427-6c26da5d1e39/collect-profiles/0.log" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.357928 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.358037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr" event={"ID":"412eb7b9-351d-44b2-a427-6c26da5d1e39","Type":"ContainerDied","Data":"59e2109db24d33a808916f9c3f44a9c94518d8801716d58a0365bb937588a554"} Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.358083 4856 scope.go:117] "RemoveContainer" containerID="04058e59502aa9032d2f122c8ec5cfeccdae1290c6ce7451de6f44da5bd55aa1" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.372438 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzfrz" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.384211 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-utilities\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.384250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fk2\" (UniqueName: \"kubernetes.io/projected/7ca6b2f8-d259-4c32-9582-d18786e0762a-kube-api-access-c2fk2\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.384297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.384330 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-catalog-content\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.384371 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/412eb7b9-351d-44b2-a427-6c26da5d1e39-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.384769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-catalog-content\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.385067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-utilities\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.385837 
4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.885791037 +0000 UTC m=+178.068683518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.419104 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/412eb7b9-351d-44b2-a427-6c26da5d1e39-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "412eb7b9-351d-44b2-a427-6c26da5d1e39" (UID: "412eb7b9-351d-44b2-a427-6c26da5d1e39"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.425712 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxbl9"] Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.452022 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.453128 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.480954 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-29h7h" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.481314 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/412eb7b9-351d-44b2-a427-6c26da5d1e39-kube-api-access-n4cwg" (OuterVolumeSpecName: "kube-api-access-n4cwg") pod "412eb7b9-351d-44b2-a427-6c26da5d1e39" (UID: "412eb7b9-351d-44b2-a427-6c26da5d1e39"). InnerVolumeSpecName "kube-api-access-n4cwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.486446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.489888 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:09.989846718 +0000 UTC m=+178.172739019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.502115 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:09 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:09 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:09 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.502228 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.505974 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/412eb7b9-351d-44b2-a427-6c26da5d1e39-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.506022 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4cwg\" (UniqueName: \"kubernetes.io/projected/412eb7b9-351d-44b2-a427-6c26da5d1e39-kube-api-access-n4cwg\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.524209 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2fk2\" (UniqueName: \"kubernetes.io/projected/7ca6b2f8-d259-4c32-9582-d18786e0762a-kube-api-access-c2fk2\") pod \"certified-operators-ql25n\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.544873 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxbl9"] Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.608599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-catalog-content\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.608739 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.608764 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-utilities\") pod 
\"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.608787 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vtg\" (UniqueName: \"kubernetes.io/projected/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-kube-api-access-s5vtg\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.611303 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.111286736 +0000 UTC m=+178.294179037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.714154 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.714488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-catalog-content\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.714566 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-utilities\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.714608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5vtg\" (UniqueName: \"kubernetes.io/projected/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-kube-api-access-s5vtg\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.715040 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.21502528 +0000 UTC m=+178.397917581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.715645 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-catalog-content\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.715948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-utilities\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.718158 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"] Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.735139 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412540-mbjsr"] Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.745371 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5vtg\" (UniqueName: \"kubernetes.io/projected/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-kube-api-access-s5vtg\") pod \"community-operators-zxbl9\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.810443 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.816427 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.816990 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.316973716 +0000 UTC m=+178.499866017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.831847 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.849044 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.918067 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.918333 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.418286756 +0000 UTC m=+178.601179057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:09 crc kubenswrapper[4856]: I1203 09:15:09.918608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:09 crc kubenswrapper[4856]: E1203 09:15:09.919148 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.419138698 +0000 UTC m=+178.602030999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.019782 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.020236 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.520186631 +0000 UTC m=+178.703078932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.020355 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.020928 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.52091296 +0000 UTC m=+178.703805261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.074617 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzfrz"] Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.118591 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k2kk"] Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.121927 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.122417 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.622374383 +0000 UTC m=+178.805266734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.181637 4856 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T09:15:09.220740184Z","Handler":null,"Name":""} Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.224528 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.225084 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.725057829 +0000 UTC m=+178.907950120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.226925 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ql25n"] Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.282933 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mcbr8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.282968 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mcbr8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.283591 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podUID="11cb3301-096a-4379-aa98-92390b343969" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.283766 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" podUID="11cb3301-096a-4379-aa98-92390b343969" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.326250 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.326548 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.826498732 +0000 UTC m=+179.009391053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.326624 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.327155 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.827141019 +0000 UTC m=+179.010033320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.363230 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" event={"ID":"a13cce6b-a09e-4736-88b2-8212ae48ee93","Type":"ContainerStarted","Data":"929e71ad0ec03fe48012e7eab97f9dc572bef020013a71812a7e93386fa5c10e"} Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.428414 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.428697 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.928661134 +0000 UTC m=+179.111553435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.428865 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:10 crc kubenswrapper[4856]: E1203 09:15:10.429545 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 09:15:10.929522367 +0000 UTC m=+179.112414668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7nllr" (UID: "33cee223-4bd1-4769-b794-5607b6610b92") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.478940 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:10 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:10 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:10 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.479034 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.505986 4856 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.506050 4856 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.522083 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rmzl5" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.531562 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.539833 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.633678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.678011 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.678086 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.709010 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="412eb7b9-351d-44b2-a427-6c26da5d1e39" path="/var/lib/kubelet/pods/412eb7b9-351d-44b2-a427-6c26da5d1e39/volumes" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.711175 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.871206 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.886874 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bxcf2" Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.944954 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-llz4m"] Dec 03 09:15:10 crc kubenswrapper[4856]: I1203 09:15:10.947778 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.528139 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.529395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2mn\" (UniqueName: \"kubernetes.io/projected/5fe6decf-3a3f-4150-a617-176207930add-kube-api-access-td2mn\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.529560 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-utilities\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.529588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-catalog-content\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.565587 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-llz4m"] Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.570296 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:11 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:11 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:11 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.570370 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.576284 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2kk" event={"ID":"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5","Type":"ContainerStarted","Data":"9f304adf5f92c441083c274eeba06928f23375303838050a3bfd1084f617930a"} Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.611320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" event={"ID":"2da94f26-0a24-411f-9e4b-a5f7ac82c15f","Type":"ContainerStarted","Data":"81f31ae94a4965589348284532d0a912bcd558e9a6a5dd0ea5606bb85a4eff41"} Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.632543 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-utilities\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 
09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.632608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-catalog-content\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.632703 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td2mn\" (UniqueName: \"kubernetes.io/projected/5fe6decf-3a3f-4150-a617-176207930add-kube-api-access-td2mn\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.633984 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-utilities\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.634412 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-catalog-content\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.655882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7nllr\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.664380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f60eec4c-0511-4174-ab54-861bd8b1fc31","Type":"ContainerStarted","Data":"382c2bfb522fb89e9d8db3d9943b67d666c9ac526f9ca3b540e898b8062fbab8"} Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.667820 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerStarted","Data":"f9edd259290633bb55db7d2ed124d435a6deeb2ad5d94700a724a4f43dfc686f"} Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.689450 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdzg8"] Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.691866 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.699258 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.707700 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td2mn\" (UniqueName: \"kubernetes.io/projected/5fe6decf-3a3f-4150-a617-176207930add-kube-api-access-td2mn\") pod \"redhat-marketplace-llz4m\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.725672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" event={"ID":"f9dd0ced-1cd2-4711-b54f-bdce45437d2c","Type":"ContainerStarted","Data":"a17dc0bcc2e25a9b725fbec315b37ffc0b580cf7514fcc04573dfa05b2a9191f"} Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.731162 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql25n" event={"ID":"7ca6b2f8-d259-4c32-9582-d18786e0762a","Type":"ContainerStarted","Data":"10388f39444d771f74c1974d77b3d21e554af78546d665352a0cf7a478da957b"} Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.735021 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdzg8"] Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.737225 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5fgm9" podStartSLOduration=26.737195025 podStartE2EDuration="26.737195025s" podCreationTimestamp="2025-12-03 09:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:11.68406832 +0000 UTC m=+179.866960621" watchObservedRunningTime="2025-12-03 09:15:11.737195025 +0000 UTC m=+179.920087326" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.831436 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" podStartSLOduration=10.831397018 podStartE2EDuration="10.831397018s" podCreationTimestamp="2025-12-03 09:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:11.819088705 +0000 UTC m=+180.001981006" watchObservedRunningTime="2025-12-03 09:15:11.831397018 +0000 UTC m=+180.014289319" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.838662 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mhw4\" (UniqueName: \"kubernetes.io/projected/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-kube-api-access-5mhw4\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.838821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-utilities\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.839012 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-catalog-content\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.930796 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2hp6"] Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.932082 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.936425 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.940973 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-utilities\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.941297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-catalog-content\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.941475 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mhw4\" (UniqueName: \"kubernetes.io/projected/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-kube-api-access-5mhw4\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.942202 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-utilities\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.943309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-catalog-content\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.961812 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2hp6"] Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.972808 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mhw4\" (UniqueName: \"kubernetes.io/projected/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-kube-api-access-5mhw4\") pod \"redhat-marketplace-sdzg8\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:11 crc kubenswrapper[4856]: I1203 09:15:11.989678 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxbl9"] Dec 03 09:15:11 crc kubenswrapper[4856]: W1203 09:15:11.999724 4856 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c8458ae_48c5_41fa_95a1_a22d7b0c250c.slice/crio-df6f0b4962026707e7f9bfaf6bdbda2a1096cee745909019d0c6acc7b79bd4a7 WatchSource:0}: Error finding container df6f0b4962026707e7f9bfaf6bdbda2a1096cee745909019d0c6acc7b79bd4a7: Status 404 returned error can't find the container with id df6f0b4962026707e7f9bfaf6bdbda2a1096cee745909019d0c6acc7b79bd4a7 Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.043107 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248fr\" (UniqueName: \"kubernetes.io/projected/08355962-f2c7-470e-982b-24594f675b64-kube-api-access-248fr\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.043209 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-utilities\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.043359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-catalog-content\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.101510 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.138051 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lzxqm"] Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.140150 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.147554 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248fr\" (UniqueName: \"kubernetes.io/projected/08355962-f2c7-470e-982b-24594f675b64-kube-api-access-248fr\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.147630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-utilities\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.147669 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-catalog-content\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.148927 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-utilities\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.149663 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-catalog-content\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.183630 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzxqm"] Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.208491 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248fr\" (UniqueName: \"kubernetes.io/projected/08355962-f2c7-470e-982b-24594f675b64-kube-api-access-248fr\") pod \"redhat-operators-q2hp6\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.249344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-utilities\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.249748 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlmjr\" (UniqueName: \"kubernetes.io/projected/19e871a1-697e-4373-b40b-81cb18911db0-kube-api-access-jlmjr\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.249801 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-catalog-content\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.361239 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-utilities\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.361297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlmjr\" (UniqueName: \"kubernetes.io/projected/19e871a1-697e-4373-b40b-81cb18911db0-kube-api-access-jlmjr\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.361332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-catalog-content\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.367924 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-catalog-content\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.382360 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-utilities\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.393794 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.394197 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlmjr\" (UniqueName: \"kubernetes.io/projected/19e871a1-697e-4373-b40b-81cb18911db0-kube-api-access-jlmjr\") pod \"redhat-operators-lzxqm\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.437622 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.440978 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.518109 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.537430 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7nllr"] Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.680483 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:12 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:12 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:12 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.681181 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.844761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cqjvn" event={"ID":"f9dd0ced-1cd2-4711-b54f-bdce45437d2c","Type":"ContainerStarted","Data":"90444644d6a58d1205abb66ef04081fabe25f1ba78722cd5684c2b645d511ba6"} Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.894798 4856 generic.go:334] "Generic (PLEG): container finished" podID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerID="efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34" exitCode=0 Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.895001 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxbl9" event={"ID":"5c8458ae-48c5-41fa-95a1-a22d7b0c250c","Type":"ContainerDied","Data":"efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34"} Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.895058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxbl9" event={"ID":"5c8458ae-48c5-41fa-95a1-a22d7b0c250c","Type":"ContainerStarted","Data":"df6f0b4962026707e7f9bfaf6bdbda2a1096cee745909019d0c6acc7b79bd4a7"} Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.903646 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-llz4m"] Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.916748 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.960501 4856 generic.go:334] "Generic (PLEG): container finished" podID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerID="7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d" exitCode=0 Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.961115 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql25n" event={"ID":"7ca6b2f8-d259-4c32-9582-d18786e0762a","Type":"ContainerDied","Data":"7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d"} Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.968584 4856 generic.go:334] "Generic (PLEG): container finished" podID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerID="cdee22dfe87eac92cf59cc4aa061243f2dc116dbfcf4d782c079573c7efe5663" exitCode=0 Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.968672 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2kk" event={"ID":"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5","Type":"ContainerDied","Data":"cdee22dfe87eac92cf59cc4aa061243f2dc116dbfcf4d782c079573c7efe5663"} Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.974763 4856 generic.go:334] "Generic (PLEG): container finished" podID="a13cce6b-a09e-4736-88b2-8212ae48ee93" containerID="929e71ad0ec03fe48012e7eab97f9dc572bef020013a71812a7e93386fa5c10e" exitCode=0 Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.974893 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" event={"ID":"a13cce6b-a09e-4736-88b2-8212ae48ee93","Type":"ContainerDied","Data":"929e71ad0ec03fe48012e7eab97f9dc572bef020013a71812a7e93386fa5c10e"} Dec 03 09:15:12 crc kubenswrapper[4856]: I1203 09:15:12.980520 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f60eec4c-0511-4174-ab54-861bd8b1fc31","Type":"ContainerStarted","Data":"cd8ca1a9e364061d07b064d480d934785320dcbf44dae0596b36760f3706ff70"} Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:12.991346 4856 generic.go:334] "Generic (PLEG): container finished" podID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerID="39b6e31982a1e88c77ae8c431ee86d8a59a718aff7dbf88d9040d71ba63bb9ef" exitCode=0 Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:12.993308 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerDied","Data":"39b6e31982a1e88c77ae8c431ee86d8a59a718aff7dbf88d9040d71ba63bb9ef"} Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.069449 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cqjvn" podStartSLOduration=156.069403148 podStartE2EDuration="2m36.069403148s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:13.020773092 +0000 UTC m=+181.203665413" watchObservedRunningTime="2025-12-03 09:15:13.069403148 +0000 UTC m=+181.252295479" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.145335 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.146641 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.159708 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.160081 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.173805 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.294463 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mcbr8" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.295240 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67571e62-5bf0-477a-8136-be1d1234b22b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.295335 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67571e62-5bf0-477a-8136-be1d1234b22b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.417602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67571e62-5bf0-477a-8136-be1d1234b22b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.417879 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67571e62-5bf0-477a-8136-be1d1234b22b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.418224 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67571e62-5bf0-477a-8136-be1d1234b22b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.438139 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzxqm"] Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.446232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67571e62-5bf0-477a-8136-be1d1234b22b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.475117 4856 patch_prober.go:28] interesting 
pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:13 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:13 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:13 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.475202 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.501477 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdzg8"] Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.554043 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:13 crc kubenswrapper[4856]: I1203 09:15:13.748519 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2hp6"] Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.011742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" event={"ID":"33cee223-4bd1-4769-b794-5607b6610b92","Type":"ContainerStarted","Data":"7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.012543 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" event={"ID":"33cee223-4bd1-4769-b794-5607b6610b92","Type":"ContainerStarted","Data":"d556a85ce439e9a092aed89e4bca768a00da5f6c29a1ab635c6e007c1c58a437"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.034209 4856 generic.go:334] "Generic (PLEG): container finished" podID="f60eec4c-0511-4174-ab54-861bd8b1fc31" containerID="cd8ca1a9e364061d07b064d480d934785320dcbf44dae0596b36760f3706ff70" exitCode=0 Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.034706 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f60eec4c-0511-4174-ab54-861bd8b1fc31","Type":"ContainerDied","Data":"cd8ca1a9e364061d07b064d480d934785320dcbf44dae0596b36760f3706ff70"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.044191 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" podStartSLOduration=157.044144007 podStartE2EDuration="2m37.044144007s" podCreationTimestamp="2025-12-03 09:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:14.042650978 +0000 UTC m=+182.225543279" watchObservedRunningTime="2025-12-03 09:15:14.044144007 +0000 UTC m=+182.227036308" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.046890 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerStarted","Data":"4fd52f5069e5ca0ba1108c2ee5216d3fa877dff899041954198f6be361e41110"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.049397 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="5fe6decf-3a3f-4150-a617-176207930add" containerID="fa5bc87f05343ddc8d46a44914557b281c1586faa76d65c5fe12e2e1cb1ff64e" exitCode=0 Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.049464 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llz4m" event={"ID":"5fe6decf-3a3f-4150-a617-176207930add","Type":"ContainerDied","Data":"fa5bc87f05343ddc8d46a44914557b281c1586faa76d65c5fe12e2e1cb1ff64e"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.049489 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llz4m" event={"ID":"5fe6decf-3a3f-4150-a617-176207930add","Type":"ContainerStarted","Data":"175b904117ccc25ee0d986c7dfa9dddb27d885a68c9d2a787819d21ef77c41cc"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.060199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdzg8" event={"ID":"0ceab033-9d0f-4e3c-a6ad-1daaef67c864","Type":"ContainerStarted","Data":"f2f0deb5b2ecc082d08f73547bfb1994fa4ecbdb79391ff2d0c20391c2178d6e"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.064520 4856 generic.go:334] "Generic (PLEG): container finished" podID="19e871a1-697e-4373-b40b-81cb18911db0" containerID="bd68b1d849187308b3962cd6b12b8b831b4b7c532de30c751923a09d9b102e5c" exitCode=0 Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.065790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzxqm" event={"ID":"19e871a1-697e-4373-b40b-81cb18911db0","Type":"ContainerDied","Data":"bd68b1d849187308b3962cd6b12b8b831b4b7c532de30c751923a09d9b102e5c"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.065848 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzxqm" event={"ID":"19e871a1-697e-4373-b40b-81cb18911db0","Type":"ContainerStarted","Data":"5fc9e2aebadbcb1c7ab4a191a4494ef3c1d557fe40636922b313cca50107b2ca"} Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.212397 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-t7bzl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]log ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]etcd ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/max-in-flight-filter ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 09:15:14 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 03 09:15:14 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-startinformers ok Dec 03 09:15:14 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 09:15:14 crc kubenswrapper[4856]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 09:15:14 crc kubenswrapper[4856]: livez check failed Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.212526 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" podUID="6d04ab91-7b77-4086-87cc-f02f3e57c9c6" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.480262 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:14 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:14 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:14 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.481245 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.561240 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.812226 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.816217 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.919260 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f60eec4c-0511-4174-ab54-861bd8b1fc31-kubelet-dir\") pod \"f60eec4c-0511-4174-ab54-861bd8b1fc31\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.919332 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a13cce6b-a09e-4736-88b2-8212ae48ee93-config-volume\") pod \"a13cce6b-a09e-4736-88b2-8212ae48ee93\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.919387 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f60eec4c-0511-4174-ab54-861bd8b1fc31-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f60eec4c-0511-4174-ab54-861bd8b1fc31" (UID: "f60eec4c-0511-4174-ab54-861bd8b1fc31"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.919406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f60eec4c-0511-4174-ab54-861bd8b1fc31-kube-api-access\") pod \"f60eec4c-0511-4174-ab54-861bd8b1fc31\" (UID: \"f60eec4c-0511-4174-ab54-861bd8b1fc31\") " Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.919446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6rhx\" (UniqueName: \"kubernetes.io/projected/a13cce6b-a09e-4736-88b2-8212ae48ee93-kube-api-access-p6rhx\") pod \"a13cce6b-a09e-4736-88b2-8212ae48ee93\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.919517 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a13cce6b-a09e-4736-88b2-8212ae48ee93-secret-volume\") pod \"a13cce6b-a09e-4736-88b2-8212ae48ee93\" (UID: \"a13cce6b-a09e-4736-88b2-8212ae48ee93\") " Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.920057 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f60eec4c-0511-4174-ab54-861bd8b1fc31-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.920352 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13cce6b-a09e-4736-88b2-8212ae48ee93-config-volume" (OuterVolumeSpecName: "config-volume") pod "a13cce6b-a09e-4736-88b2-8212ae48ee93" (UID: "a13cce6b-a09e-4736-88b2-8212ae48ee93"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.929879 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13cce6b-a09e-4736-88b2-8212ae48ee93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a13cce6b-a09e-4736-88b2-8212ae48ee93" (UID: "a13cce6b-a09e-4736-88b2-8212ae48ee93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.930433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13cce6b-a09e-4736-88b2-8212ae48ee93-kube-api-access-p6rhx" (OuterVolumeSpecName: "kube-api-access-p6rhx") pod "a13cce6b-a09e-4736-88b2-8212ae48ee93" (UID: "a13cce6b-a09e-4736-88b2-8212ae48ee93"). InnerVolumeSpecName "kube-api-access-p6rhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:15:14 crc kubenswrapper[4856]: I1203 09:15:14.930763 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60eec4c-0511-4174-ab54-861bd8b1fc31-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f60eec4c-0511-4174-ab54-861bd8b1fc31" (UID: "f60eec4c-0511-4174-ab54-861bd8b1fc31"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.028570 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a13cce6b-a09e-4736-88b2-8212ae48ee93-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.032689 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f60eec4c-0511-4174-ab54-861bd8b1fc31-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.032855 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6rhx\" (UniqueName: \"kubernetes.io/projected/a13cce6b-a09e-4736-88b2-8212ae48ee93-kube-api-access-p6rhx\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.032955 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a13cce6b-a09e-4736-88b2-8212ae48ee93-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.101870 4856 generic.go:334] "Generic (PLEG): container finished" podID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerID="871066e55e0996bf52c5e70d612918d64e0ec50b99a0797f3af1e682e6c1408b" exitCode=0 Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.101959 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdzg8" event={"ID":"0ceab033-9d0f-4e3c-a6ad-1daaef67c864","Type":"ContainerDied","Data":"871066e55e0996bf52c5e70d612918d64e0ec50b99a0797f3af1e682e6c1408b"} Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.107882 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" event={"ID":"a13cce6b-a09e-4736-88b2-8212ae48ee93","Type":"ContainerDied","Data":"4787fe02ad2384836ef320cd98d5df65486b474cd1e19fd581b96bad992736d9"} Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.107935 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.108008 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4787fe02ad2384836ef320cd98d5df65486b474cd1e19fd581b96bad992736d9" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.121867 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f60eec4c-0511-4174-ab54-861bd8b1fc31","Type":"ContainerDied","Data":"382c2bfb522fb89e9d8db3d9943b67d666c9ac526f9ca3b540e898b8062fbab8"} Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.121929 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="382c2bfb522fb89e9d8db3d9943b67d666c9ac526f9ca3b540e898b8062fbab8" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.122050 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.131215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67571e62-5bf0-477a-8136-be1d1234b22b","Type":"ContainerStarted","Data":"4c93832245c004c8844d202423673c156f625e31e7c96bb4fe235e5ba852f419"} Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.145145 4856 generic.go:334] "Generic (PLEG): container finished" podID="08355962-f2c7-470e-982b-24594f675b64" containerID="77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0" exitCode=0 Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.145489 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerDied","Data":"77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0"} Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.146185 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.544553 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:15 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:15 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:15 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:15 crc kubenswrapper[4856]: I1203 09:15:15.545135 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:16 crc kubenswrapper[4856]: I1203 09:15:16.239954 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67571e62-5bf0-477a-8136-be1d1234b22b","Type":"ContainerStarted","Data":"65b585d014baa1cc90503001f9b78b5ba108303aab0f8e2f2e70fa2d3e826ed5"} Dec 03 09:15:16 crc kubenswrapper[4856]: I1203 09:15:16.379599 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.379565606 podStartE2EDuration="3.379565606s" podCreationTimestamp="2025-12-03 09:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:15:16.375318664 +0000 UTC m=+184.558210965" watchObservedRunningTime="2025-12-03 09:15:16.379565606 +0000 UTC m=+184.562457907" Dec 03 09:15:16 crc kubenswrapper[4856]: I1203 09:15:16.520001 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:16 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:16 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:16 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:16 crc kubenswrapper[4856]: I1203 09:15:16.520099 4856 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:17 crc kubenswrapper[4856]: I1203 09:15:17.313248 4856 patch_prober.go:28] interesting pod/console-f9d7485db-n2k6t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 03 09:15:17 crc kubenswrapper[4856]: I1203 09:15:17.313929 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2k6t" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 03 09:15:17 crc kubenswrapper[4856]: I1203 09:15:17.472203 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:17 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:17 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:17 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:17 crc kubenswrapper[4856]: I1203 09:15:17.472758 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.009084 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.009222 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.009110 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.009971 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.298202 4856 generic.go:334] "Generic (PLEG): container finished" podID="67571e62-5bf0-477a-8136-be1d1234b22b" containerID="65b585d014baa1cc90503001f9b78b5ba108303aab0f8e2f2e70fa2d3e826ed5" exitCode=0 Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.298300 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67571e62-5bf0-477a-8136-be1d1234b22b","Type":"ContainerDied","Data":"65b585d014baa1cc90503001f9b78b5ba108303aab0f8e2f2e70fa2d3e826ed5"} Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.488173 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:18 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:18 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:18 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:18 crc kubenswrapper[4856]: I1203 09:15:18.488248 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:19 crc kubenswrapper[4856]: I1203 09:15:19.184325 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:15:19 crc kubenswrapper[4856]: I1203 09:15:19.194858 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-t7bzl" Dec 03 09:15:19 crc kubenswrapper[4856]: I1203 09:15:19.477312 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:19 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:19 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:19 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:19 crc kubenswrapper[4856]: I1203 09:15:19.477422 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.310612 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.424122 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.425009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67571e62-5bf0-477a-8136-be1d1234b22b","Type":"ContainerDied","Data":"4c93832245c004c8844d202423673c156f625e31e7c96bb4fe235e5ba852f419"} Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.425067 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c93832245c004c8844d202423673c156f625e31e7c96bb4fe235e5ba852f419" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.471881 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:20 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:20 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:20 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.471979 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.505473 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67571e62-5bf0-477a-8136-be1d1234b22b-kubelet-dir\") pod \"67571e62-5bf0-477a-8136-be1d1234b22b\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.505743 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67571e62-5bf0-477a-8136-be1d1234b22b-kube-api-access\") pod \"67571e62-5bf0-477a-8136-be1d1234b22b\" (UID: \"67571e62-5bf0-477a-8136-be1d1234b22b\") " Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.505795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67571e62-5bf0-477a-8136-be1d1234b22b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67571e62-5bf0-477a-8136-be1d1234b22b" (UID: "67571e62-5bf0-477a-8136-be1d1234b22b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.548660 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67571e62-5bf0-477a-8136-be1d1234b22b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67571e62-5bf0-477a-8136-be1d1234b22b" (UID: "67571e62-5bf0-477a-8136-be1d1234b22b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.608419 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67571e62-5bf0-477a-8136-be1d1234b22b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:20 crc kubenswrapper[4856]: I1203 09:15:20.608459 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67571e62-5bf0-477a-8136-be1d1234b22b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:15:21 crc kubenswrapper[4856]: I1203 09:15:21.470403 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:21 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:21 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:21 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:21 crc kubenswrapper[4856]: I1203 09:15:21.470473 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:22 crc kubenswrapper[4856]: I1203 09:15:22.473603 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:22 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:22 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:22 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:22 crc kubenswrapper[4856]: I1203 09:15:22.474142 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:22 crc kubenswrapper[4856]: I1203 09:15:22.758904 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:15:22 crc kubenswrapper[4856]: I1203 09:15:22.758977 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:15:23 crc kubenswrapper[4856]: I1203 09:15:23.471505 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:23 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:23 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:23 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:23 crc kubenswrapper[4856]: 
I1203 09:15:23.471623 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:23 crc kubenswrapper[4856]: I1203 09:15:23.547733 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 09:15:24 crc kubenswrapper[4856]: I1203 09:15:24.470574 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:24 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:24 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:24 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:24 crc kubenswrapper[4856]: I1203 09:15:24.470636 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:25 crc kubenswrapper[4856]: I1203 09:15:25.472347 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:25 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:25 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:25 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:25 crc kubenswrapper[4856]: I1203 09:15:25.472433 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:26 crc kubenswrapper[4856]: I1203 09:15:26.472951 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:26 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:26 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:26 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:26 crc kubenswrapper[4856]: I1203 09:15:26.473076 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.312276 4856 patch_prober.go:28] interesting pod/console-f9d7485db-n2k6t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.312876 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2k6t" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.471345 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:27 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:27 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:27 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.471480 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.954853 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.954899 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.954921 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.954965 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.954981 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.955706 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.955711 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"6efdd9261dbede7bb8e7ed28aae96bcc023f3b1c3932ff323aa2e962db910b6c"} pod="openshift-console/downloads-7954f5f757-dgg9g" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.955776 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:27 crc kubenswrapper[4856]: I1203 09:15:27.955837 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" containerID="cri-o://6efdd9261dbede7bb8e7ed28aae96bcc023f3b1c3932ff323aa2e962db910b6c" gracePeriod=2 Dec 03 09:15:28 crc kubenswrapper[4856]: I1203 09:15:28.473281 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:28 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:28 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:28 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:28 crc kubenswrapper[4856]: I1203 09:15:28.473390 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:28 crc kubenswrapper[4856]: I1203 09:15:28.559725 4856 generic.go:334] "Generic (PLEG): container finished" podID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerID="6efdd9261dbede7bb8e7ed28aae96bcc023f3b1c3932ff323aa2e962db910b6c" exitCode=0 Dec 03 09:15:28 crc kubenswrapper[4856]: I1203 09:15:28.559780 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dgg9g" event={"ID":"a7d7cbff-7ff4-4512-b946-61cc310f6959","Type":"ContainerDied","Data":"6efdd9261dbede7bb8e7ed28aae96bcc023f3b1c3932ff323aa2e962db910b6c"} Dec 03 09:15:29 crc kubenswrapper[4856]: I1203 09:15:29.472168 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:29 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:29 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:29 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:29 crc kubenswrapper[4856]: I1203 09:15:29.472536 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:30 crc kubenswrapper[4856]: I1203 09:15:30.471402 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:30 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:30 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:30 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:30 crc kubenswrapper[4856]: I1203 09:15:30.471484 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:31 crc 
kubenswrapper[4856]: I1203 09:15:31.471645 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:31 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:31 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:31 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:31 crc kubenswrapper[4856]: I1203 09:15:31.471716 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:31 crc kubenswrapper[4856]: I1203 09:15:31.706318 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:15:32 crc kubenswrapper[4856]: I1203 09:15:32.470056 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:32 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:32 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:32 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:32 crc kubenswrapper[4856]: I1203 09:15:32.470140 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:33 crc kubenswrapper[4856]: I1203 09:15:33.471123 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:33 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:33 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:33 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:33 crc kubenswrapper[4856]: I1203 09:15:33.471227 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:34 crc kubenswrapper[4856]: I1203 09:15:34.470459 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:34 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:34 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:34 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:34 crc kubenswrapper[4856]: I1203 09:15:34.470531 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:35 crc kubenswrapper[4856]: I1203 
09:15:35.471142 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:35 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:35 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:35 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:35 crc kubenswrapper[4856]: I1203 09:15:35.471204 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:36 crc kubenswrapper[4856]: I1203 09:15:36.471550 4856 patch_prober.go:28] interesting pod/router-default-5444994796-sjdhc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 09:15:36 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Dec 03 09:15:36 crc kubenswrapper[4856]: [+]process-running ok Dec 03 09:15:36 crc kubenswrapper[4856]: healthz check failed Dec 03 09:15:36 crc kubenswrapper[4856]: I1203 09:15:36.471693 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sjdhc" podUID="4fab7782-323c-4f3f-95ed-ea320135d284" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 09:15:37 crc kubenswrapper[4856]: I1203 09:15:37.310994 4856 patch_prober.go:28] interesting pod/console-f9d7485db-n2k6t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 03 09:15:37 crc kubenswrapper[4856]: I1203 09:15:37.311348 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2k6t" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 03 09:15:37 crc kubenswrapper[4856]: I1203 09:15:37.470887 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:15:37 crc kubenswrapper[4856]: I1203 09:15:37.473605 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-sjdhc" Dec 03 09:15:37 crc kubenswrapper[4856]: I1203 09:15:37.955349 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:37 crc kubenswrapper[4856]: I1203 09:15:37.955428 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:39 crc kubenswrapper[4856]: I1203 09:15:39.468253 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b2fpx" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396211 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 09:15:43 crc kubenswrapper[4856]: E1203 09:15:43.396435 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67571e62-5bf0-477a-8136-be1d1234b22b" containerName="pruner" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396446 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="67571e62-5bf0-477a-8136-be1d1234b22b" containerName="pruner" Dec 03 09:15:43 crc kubenswrapper[4856]: E1203 09:15:43.396467 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60eec4c-0511-4174-ab54-861bd8b1fc31" containerName="pruner" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396474 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60eec4c-0511-4174-ab54-861bd8b1fc31" containerName="pruner" Dec 03 09:15:43 crc kubenswrapper[4856]: E1203 09:15:43.396491 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13cce6b-a09e-4736-88b2-8212ae48ee93" containerName="collect-profiles" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396499 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13cce6b-a09e-4736-88b2-8212ae48ee93" containerName="collect-profiles" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396608 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="67571e62-5bf0-477a-8136-be1d1234b22b" containerName="pruner" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396622 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13cce6b-a09e-4736-88b2-8212ae48ee93" containerName="collect-profiles" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.396636 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60eec4c-0511-4174-ab54-861bd8b1fc31" containerName="pruner" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.397073 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.399569 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.401351 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.408193 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.550559 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.550653 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.652515 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.652737 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.652877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.674996 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:43 crc kubenswrapper[4856]: I1203 09:15:43.712298 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:15:47 crc kubenswrapper[4856]: I1203 09:15:47.315391 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:15:47 crc kubenswrapper[4856]: I1203 09:15:47.319690 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:15:47 crc kubenswrapper[4856]: I1203 09:15:47.956368 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:47 crc kubenswrapper[4856]: I1203 09:15:47.956457 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.060563 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.061507 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.073769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-var-lock\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.073906 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e5cefc-7eaf-45bc-919c-854583e63aba-kube-api-access\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.073941 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.083203 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.175709 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e5cefc-7eaf-45bc-919c-854583e63aba-kube-api-access\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.175844 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.175950 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-var-lock\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.175985 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.176080 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-var-lock\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.203460 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e5cefc-7eaf-45bc-919c-854583e63aba-kube-api-access\") pod \"installer-9-crc\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:49 crc kubenswrapper[4856]: I1203 09:15:49.401766 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:15:52 crc kubenswrapper[4856]: I1203 09:15:52.759680 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:15:52 crc kubenswrapper[4856]: I1203 09:15:52.760837 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:15:52 crc kubenswrapper[4856]: I1203 09:15:52.760922 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:15:52 crc kubenswrapper[4856]: I1203 09:15:52.761685 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:15:52 crc kubenswrapper[4856]: I1203 09:15:52.761757 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360" gracePeriod=600 Dec 03 09:15:53 crc kubenswrapper[4856]: I1203 09:15:53.016982 4856 
generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360" exitCode=0 Dec 03 09:15:53 crc kubenswrapper[4856]: I1203 09:15:53.017038 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360"} Dec 03 09:15:56 crc kubenswrapper[4856]: E1203 09:15:56.512467 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 09:15:56 crc kubenswrapper[4856]: E1203 09:15:56.513558 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42hnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6k2kk_openshift-marketplace(fb7d3a7f-5b39-41fd-a96b-55118b44cfa5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:15:56 crc kubenswrapper[4856]: E1203 09:15:56.515360 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6k2kk" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" Dec 03 09:15:56 crc kubenswrapper[4856]: E1203 09:15:56.523069 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 09:15:56 crc kubenswrapper[4856]: E1203 09:15:56.523325 4856 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2fk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ql25n_openshift-marketplace(7ca6b2f8-d259-4c32-9582-d18786e0762a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:15:56 crc kubenswrapper[4856]: E1203 09:15:56.541854 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ql25n" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" Dec 03 09:15:57 crc kubenswrapper[4856]: I1203 09:15:57.958246 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:15:57 crc kubenswrapper[4856]: I1203 09:15:57.959086 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.056609 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ql25n" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.057342 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6k2kk" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.136924 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.137123 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5vtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zxbl9_openshift-marketplace(5c8458ae-48c5-41fa-95a1-a22d7b0c250c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.138371 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zxbl9" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.154044 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.154317 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5r2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wzfrz_openshift-marketplace(4f5bdcd1-62a5-4fe9-8968-7979ce92bb72): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:15:58 crc kubenswrapper[4856]: E1203 09:15:58.155925 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wzfrz" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.237792 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zxbl9" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.238514 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wzfrz" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.313261 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.313776 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mhw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sdzg8_openshift-marketplace(0ceab033-9d0f-4e3c-a6ad-1daaef67c864): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.315048 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sdzg8" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.320297 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.320502 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-llz4m_openshift-marketplace(5fe6decf-3a3f-4150-a617-176207930add): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:15:59 crc kubenswrapper[4856]: E1203 09:15:59.321694 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-llz4m" podUID="5fe6decf-3a3f-4150-a617-176207930add" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.108557 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sdzg8" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.108495 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-llz4m" podUID="5fe6decf-3a3f-4150-a617-176207930add" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.184574 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.185235 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-248fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-q2hp6_openshift-marketplace(08355962-f2c7-470e-982b-24594f675b64): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.186475 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-q2hp6" podUID="08355962-f2c7-470e-982b-24594f675b64" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.216884 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.217549 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jlmjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lzxqm_openshift-marketplace(19e871a1-697e-4373-b40b-81cb18911db0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:16:03 crc kubenswrapper[4856]: E1203 09:16:03.219205 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lzxqm" podUID="19e871a1-697e-4373-b40b-81cb18911db0" Dec 03 09:16:03 crc kubenswrapper[4856]: I1203 09:16:03.592525 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 09:16:03 crc kubenswrapper[4856]: I1203 09:16:03.671554 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 09:16:03 crc kubenswrapper[4856]: W1203 09:16:03.679930 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2725cb7f_5db1_4f8f_b2dc_213ee27bac8b.slice/crio-9ad2fd358d6043ab4b873f118d517efd4e1e3d90decee2ea55237e0faceaef43 WatchSource:0}: Error finding container 9ad2fd358d6043ab4b873f118d517efd4e1e3d90decee2ea55237e0faceaef43: Status 404 returned error can't find the container with id 9ad2fd358d6043ab4b873f118d517efd4e1e3d90decee2ea55237e0faceaef43 Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.081865 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dgg9g" event={"ID":"a7d7cbff-7ff4-4512-b946-61cc310f6959","Type":"ContainerStarted","Data":"631437140fa8026d80e20f138bd383264da6c2ed6f65094f1397cca5cb80a046"} Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.082064 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.082414 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.082463 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.085336 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"b159be1ed78d05acaad24716561c365860c40f23ddf723c0a6779a751d335caa"} Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.088055 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21e5cefc-7eaf-45bc-919c-854583e63aba","Type":"ContainerStarted","Data":"0e6f203099001c346947f3ca27a7895e6857f428717968b057f3a3c709030a55"} Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.088088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21e5cefc-7eaf-45bc-919c-854583e63aba","Type":"ContainerStarted","Data":"93f4e27443ff5d15b2c1ffe2f8896c12175b69a1e7e352b72f3f83b141349df1"} Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.090300 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b","Type":"ContainerStarted","Data":"8e8e4560e312cf0e4cb32b0a58c21b866c444ecf3fb857d788ee174671612291"} Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.090393 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b","Type":"ContainerStarted","Data":"9ad2fd358d6043ab4b873f118d517efd4e1e3d90decee2ea55237e0faceaef43"} Dec 03 09:16:04 crc kubenswrapper[4856]: E1203 09:16:04.091834 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lzxqm" podUID="19e871a1-697e-4373-b40b-81cb18911db0" Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.135234 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=21.135210736 podStartE2EDuration="21.135210736s" podCreationTimestamp="2025-12-03 09:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:16:04.114091783 +0000 UTC m=+232.296984084" watchObservedRunningTime="2025-12-03 09:16:04.135210736 +0000 UTC m=+232.318103037" Dec 03 09:16:04 crc kubenswrapper[4856]: I1203 09:16:04.152587 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=15.152570405 podStartE2EDuration="15.152570405s" podCreationTimestamp="2025-12-03 09:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:16:04.151077889 +0000 UTC m=+232.333970190" 
watchObservedRunningTime="2025-12-03 09:16:04.152570405 +0000 UTC m=+232.335462706" Dec 03 09:16:05 crc kubenswrapper[4856]: I1203 09:16:05.097455 4856 generic.go:334] "Generic (PLEG): container finished" podID="2725cb7f-5db1-4f8f-b2dc-213ee27bac8b" containerID="8e8e4560e312cf0e4cb32b0a58c21b866c444ecf3fb857d788ee174671612291" exitCode=0 Dec 03 09:16:05 crc kubenswrapper[4856]: I1203 09:16:05.097652 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b","Type":"ContainerDied","Data":"8e8e4560e312cf0e4cb32b0a58c21b866c444ecf3fb857d788ee174671612291"} Dec 03 09:16:05 crc kubenswrapper[4856]: I1203 09:16:05.099238 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:16:05 crc kubenswrapper[4856]: I1203 09:16:05.099277 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.462387 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.580636 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kubelet-dir\") pod \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.580759 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kube-api-access\") pod \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\" (UID: \"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b\") " Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.580794 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2725cb7f-5db1-4f8f-b2dc-213ee27bac8b" (UID: "2725cb7f-5db1-4f8f-b2dc-213ee27bac8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.581047 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.587337 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2725cb7f-5db1-4f8f-b2dc-213ee27bac8b" (UID: "2725cb7f-5db1-4f8f-b2dc-213ee27bac8b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:06 crc kubenswrapper[4856]: I1203 09:16:06.681920 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2725cb7f-5db1-4f8f-b2dc-213ee27bac8b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.108792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2725cb7f-5db1-4f8f-b2dc-213ee27bac8b","Type":"ContainerDied","Data":"9ad2fd358d6043ab4b873f118d517efd4e1e3d90decee2ea55237e0faceaef43"} Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.108854 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad2fd358d6043ab4b873f118d517efd4e1e3d90decee2ea55237e0faceaef43" Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.108884 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.955563 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.956796 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.956739 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-dgg9g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 03 09:16:07 crc kubenswrapper[4856]: I1203 09:16:07.957071 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dgg9g" podUID="a7d7cbff-7ff4-4512-b946-61cc310f6959" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 03 09:16:16 crc kubenswrapper[4856]: I1203 09:16:16.160748 4856 generic.go:334] "Generic (PLEG): container finished" podID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerID="b61fd060e9ed012bd4c06a8303b25388479390080af2343aee31fa778c5c2ae8" exitCode=0 Dec 03 09:16:16 crc kubenswrapper[4856]: I1203 09:16:16.160958 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2kk" event={"ID":"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5","Type":"ContainerDied","Data":"b61fd060e9ed012bd4c06a8303b25388479390080af2343aee31fa778c5c2ae8"} Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.172807 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerStarted","Data":"fe067f98d6e9e8111f3b28d74f4313bc47ffdf9da36ce6b693cdae01b8bdcd51"} Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.175171 4856 generic.go:334] "Generic (PLEG): container finished" podID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" 
containerID="9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d" exitCode=0 Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.175279 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxbl9" event={"ID":"5c8458ae-48c5-41fa-95a1-a22d7b0c250c","Type":"ContainerDied","Data":"9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d"} Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.177474 4856 generic.go:334] "Generic (PLEG): container finished" podID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerID="68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5" exitCode=0 Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.177551 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql25n" event={"ID":"7ca6b2f8-d259-4c32-9582-d18786e0762a","Type":"ContainerDied","Data":"68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5"} Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.180119 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerStarted","Data":"551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc"} Dec 03 09:16:17 crc kubenswrapper[4856]: I1203 09:16:17.962730 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dgg9g" Dec 03 09:16:18 crc kubenswrapper[4856]: I1203 09:16:18.188852 4856 generic.go:334] "Generic (PLEG): container finished" podID="08355962-f2c7-470e-982b-24594f675b64" containerID="551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc" exitCode=0 Dec 03 09:16:18 crc kubenswrapper[4856]: I1203 09:16:18.188940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerDied","Data":"551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc"} Dec 03 09:16:18 crc kubenswrapper[4856]: I1203 09:16:18.192367 4856 generic.go:334] "Generic (PLEG): container finished" podID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerID="fe067f98d6e9e8111f3b28d74f4313bc47ffdf9da36ce6b693cdae01b8bdcd51" exitCode=0 Dec 03 09:16:18 crc kubenswrapper[4856]: I1203 09:16:18.192463 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerDied","Data":"fe067f98d6e9e8111f3b28d74f4313bc47ffdf9da36ce6b693cdae01b8bdcd51"} Dec 03 09:16:25 crc kubenswrapper[4856]: I1203 09:16:25.232324 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2kk" event={"ID":"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5","Type":"ContainerStarted","Data":"fdf2e1c6ee0a72c14823dd92b2976e426e8292ba2fa079a7a0801f4d8d4f008a"} Dec 03 09:16:27 crc kubenswrapper[4856]: I1203 09:16:27.248359 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerStarted","Data":"3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00"} Dec 03 09:16:27 crc kubenswrapper[4856]: I1203 09:16:27.273621 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6k2kk" podStartSLOduration=7.9888633890000005 
podStartE2EDuration="1m19.273591252s" podCreationTimestamp="2025-12-03 09:15:08 +0000 UTC" firstStartedPulling="2025-12-03 09:15:12.971242262 +0000 UTC m=+181.154134563" lastFinishedPulling="2025-12-03 09:16:24.255970125 +0000 UTC m=+252.438862426" observedRunningTime="2025-12-03 09:16:25.253869581 +0000 UTC m=+253.436761902" watchObservedRunningTime="2025-12-03 09:16:27.273591252 +0000 UTC m=+255.456483553" Dec 03 09:16:27 crc kubenswrapper[4856]: I1203 09:16:27.273853 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2hp6" podStartSLOduration=6.398223345 podStartE2EDuration="1m16.273848709s" podCreationTimestamp="2025-12-03 09:15:11 +0000 UTC" firstStartedPulling="2025-12-03 09:15:16.243166995 +0000 UTC m=+184.426059306" lastFinishedPulling="2025-12-03 09:16:26.118792339 +0000 UTC m=+254.301684670" observedRunningTime="2025-12-03 09:16:27.268216238 +0000 UTC m=+255.451108559" watchObservedRunningTime="2025-12-03 09:16:27.273848709 +0000 UTC m=+255.456741010" Dec 03 09:16:29 crc kubenswrapper[4856]: I1203 09:16:29.143593 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6k2kk" Dec 03 09:16:29 crc kubenswrapper[4856]: I1203 09:16:29.144502 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6k2kk" Dec 03 09:16:29 crc kubenswrapper[4856]: I1203 09:16:29.514516 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6k2kk" Dec 03 09:16:29 crc kubenswrapper[4856]: I1203 09:16:29.565860 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6k2kk" Dec 03 09:16:30 crc kubenswrapper[4856]: I1203 09:16:30.268754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerStarted","Data":"24f2bf5820a7ac74ddd8f1c2091f12f92894e993cf277446363ad1f45eb37924"} Dec 03 09:16:32 crc kubenswrapper[4856]: I1203 09:16:32.441323 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:16:32 crc kubenswrapper[4856]: I1203 09:16:32.441385 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:16:32 crc kubenswrapper[4856]: I1203 09:16:32.515011 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:16:32 crc kubenswrapper[4856]: I1203 09:16:32.532420 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wzfrz" podStartSLOduration=8.447840868 podStartE2EDuration="1m24.532398313s" podCreationTimestamp="2025-12-03 09:15:08 +0000 UTC" firstStartedPulling="2025-12-03 09:15:12.996483814 +0000 UTC m=+181.179376115" lastFinishedPulling="2025-12-03 09:16:29.081041239 +0000 UTC m=+257.263933560" observedRunningTime="2025-12-03 09:16:30.308574867 +0000 UTC m=+258.491467178" watchObservedRunningTime="2025-12-03 09:16:32.532398313 +0000 UTC m=+260.715290614" Dec 03 09:16:33 crc kubenswrapper[4856]: I1203 09:16:33.354696 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:16:34 crc kubenswrapper[4856]: 
I1203 09:16:34.552900 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2p8h7"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.051291 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6k2kk"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.051820 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6k2kk" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="registry-server" containerID="cri-o://fdf2e1c6ee0a72c14823dd92b2976e426e8292ba2fa079a7a0801f4d8d4f008a" gracePeriod=30 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.055926 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ql25n"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.067225 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wzfrz"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.067533 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wzfrz" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="registry-server" containerID="cri-o://24f2bf5820a7ac74ddd8f1c2091f12f92894e993cf277446363ad1f45eb37924" gracePeriod=30 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.079929 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxbl9"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.086023 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pksqg"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.086347 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" containerID="cri-o://46866ffa3c281b813bdf5adb21e5b265ac2147d2d8671e71126e122081fa86bf" gracePeriod=30 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.096070 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-llz4m"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.107943 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdzg8"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.111116 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nlmw"] Dec 03 09:16:35 crc kubenswrapper[4856]: E1203 09:16:35.111482 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2725cb7f-5db1-4f8f-b2dc-213ee27bac8b" containerName="pruner" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.111510 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2725cb7f-5db1-4f8f-b2dc-213ee27bac8b" containerName="pruner" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.111659 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2725cb7f-5db1-4f8f-b2dc-213ee27bac8b" containerName="pruner" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.115623 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.117674 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzxqm"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.133763 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nlmw"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.143056 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2hp6"] Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.157992 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87add44f-39e7-460b-9f01-d5aa27e44491-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.158085 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87add44f-39e7-460b-9f01-d5aa27e44491-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.158151 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96vm\" (UniqueName: \"kubernetes.io/projected/87add44f-39e7-460b-9f01-d5aa27e44491-kube-api-access-h96vm\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.259271 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87add44f-39e7-460b-9f01-d5aa27e44491-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.259728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87add44f-39e7-460b-9f01-d5aa27e44491-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.259778 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h96vm\" (UniqueName: \"kubernetes.io/projected/87add44f-39e7-460b-9f01-d5aa27e44491-kube-api-access-h96vm\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.261087 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87add44f-39e7-460b-9f01-d5aa27e44491-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.268008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/87add44f-39e7-460b-9f01-d5aa27e44491-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.281650 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h96vm\" (UniqueName: \"kubernetes.io/projected/87add44f-39e7-460b-9f01-d5aa27e44491-kube-api-access-h96vm\") pod \"marketplace-operator-79b997595-7nlmw\" (UID: \"87add44f-39e7-460b-9f01-d5aa27e44491\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.306817 4856 generic.go:334] "Generic (PLEG): container finished" podID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerID="24f2bf5820a7ac74ddd8f1c2091f12f92894e993cf277446363ad1f45eb37924" exitCode=0 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.306923 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerDied","Data":"24f2bf5820a7ac74ddd8f1c2091f12f92894e993cf277446363ad1f45eb37924"} Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.310695 4856 generic.go:334] "Generic (PLEG): container finished" podID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerID="fdf2e1c6ee0a72c14823dd92b2976e426e8292ba2fa079a7a0801f4d8d4f008a" exitCode=0 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.310759 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2kk" event={"ID":"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5","Type":"ContainerDied","Data":"fdf2e1c6ee0a72c14823dd92b2976e426e8292ba2fa079a7a0801f4d8d4f008a"} Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.313885 4856 generic.go:334] "Generic (PLEG): container finished" podID="19d74018-0d57-4f93-a298-64b08e3df414" containerID="46866ffa3c281b813bdf5adb21e5b265ac2147d2d8671e71126e122081fa86bf" exitCode=0 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.314448 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2hp6" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="registry-server" containerID="cri-o://3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00" gracePeriod=30 Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.314682 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" event={"ID":"19d74018-0d57-4f93-a298-64b08e3df414","Type":"ContainerDied","Data":"46866ffa3c281b813bdf5adb21e5b265ac2147d2d8671e71126e122081fa86bf"} Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.508537 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.524674 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wzfrz" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.532086 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k2kk" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.556346 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.579822 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42hnm\" (UniqueName: \"kubernetes.io/projected/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-kube-api-access-42hnm\") pod \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.579919 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5r2j\" (UniqueName: \"kubernetes.io/projected/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-kube-api-access-z5r2j\") pod \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.580022 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-catalog-content\") pod \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.580096 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-utilities\") pod \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\" (UID: \"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.580140 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-utilities\") pod \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.580169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-catalog-content\") pod \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\" (UID: \"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.583215 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-utilities" (OuterVolumeSpecName: "utilities") pod "4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" (UID: "4f5bdcd1-62a5-4fe9-8968-7979ce92bb72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.588720 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-kube-api-access-z5r2j" (OuterVolumeSpecName: "kube-api-access-z5r2j") pod "4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" (UID: "4f5bdcd1-62a5-4fe9-8968-7979ce92bb72"). InnerVolumeSpecName "kube-api-access-z5r2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.593644 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-kube-api-access-42hnm" (OuterVolumeSpecName: "kube-api-access-42hnm") pod "fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" (UID: "fb7d3a7f-5b39-41fd-a96b-55118b44cfa5"). InnerVolumeSpecName "kube-api-access-42hnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.601276 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-utilities" (OuterVolumeSpecName: "utilities") pod "fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" (UID: "fb7d3a7f-5b39-41fd-a96b-55118b44cfa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.681570 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-operator-metrics\") pod \"19d74018-0d57-4f93-a298-64b08e3df414\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.681656 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8dv8\" (UniqueName: \"kubernetes.io/projected/19d74018-0d57-4f93-a298-64b08e3df414-kube-api-access-v8dv8\") pod \"19d74018-0d57-4f93-a298-64b08e3df414\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.681676 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-trusted-ca\") pod \"19d74018-0d57-4f93-a298-64b08e3df414\" (UID: \"19d74018-0d57-4f93-a298-64b08e3df414\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.682512 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.682559 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.682579 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42hnm\" (UniqueName: \"kubernetes.io/projected/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-kube-api-access-42hnm\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.682602 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5r2j\" (UniqueName: \"kubernetes.io/projected/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-kube-api-access-z5r2j\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.683224 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "19d74018-0d57-4f93-a298-64b08e3df414" (UID: "19d74018-0d57-4f93-a298-64b08e3df414"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.696759 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" (UID: "fb7d3a7f-5b39-41fd-a96b-55118b44cfa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.700402 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d74018-0d57-4f93-a298-64b08e3df414-kube-api-access-v8dv8" (OuterVolumeSpecName: "kube-api-access-v8dv8") pod "19d74018-0d57-4f93-a298-64b08e3df414" (UID: "19d74018-0d57-4f93-a298-64b08e3df414"). InnerVolumeSpecName "kube-api-access-v8dv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.711924 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "19d74018-0d57-4f93-a298-64b08e3df414" (UID: "19d74018-0d57-4f93-a298-64b08e3df414"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.726081 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" (UID: "4f5bdcd1-62a5-4fe9-8968-7979ce92bb72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.785309 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.785530 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.785576 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8dv8\" (UniqueName: \"kubernetes.io/projected/19d74018-0d57-4f93-a298-64b08e3df414-kube-api-access-v8dv8\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.785799 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19d74018-0d57-4f93-a298-64b08e3df414-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.785862 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.889733 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.987841 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-catalog-content\") pod \"08355962-f2c7-470e-982b-24594f675b64\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.988494 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-utilities\") pod \"08355962-f2c7-470e-982b-24594f675b64\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.988530 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-248fr\" (UniqueName: \"kubernetes.io/projected/08355962-f2c7-470e-982b-24594f675b64-kube-api-access-248fr\") pod \"08355962-f2c7-470e-982b-24594f675b64\" (UID: \"08355962-f2c7-470e-982b-24594f675b64\") " Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.989192 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-utilities" (OuterVolumeSpecName: "utilities") pod "08355962-f2c7-470e-982b-24594f675b64" (UID: "08355962-f2c7-470e-982b-24594f675b64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:35 crc kubenswrapper[4856]: I1203 09:16:35.992310 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08355962-f2c7-470e-982b-24594f675b64-kube-api-access-248fr" (OuterVolumeSpecName: "kube-api-access-248fr") pod "08355962-f2c7-470e-982b-24594f675b64" (UID: "08355962-f2c7-470e-982b-24594f675b64"). InnerVolumeSpecName "kube-api-access-248fr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.092734 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.093060 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-248fr\" (UniqueName: \"kubernetes.io/projected/08355962-f2c7-470e-982b-24594f675b64-kube-api-access-248fr\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.095793 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nlmw"] Dec 03 09:16:36 crc kubenswrapper[4856]: W1203 09:16:36.110327 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87add44f_39e7_460b_9f01_d5aa27e44491.slice/crio-f27caadd601ef9817fae71f322e1ded4ed55713d92263620c1b64698eee4dcae WatchSource:0}: Error finding container f27caadd601ef9817fae71f322e1ded4ed55713d92263620c1b64698eee4dcae: Status 404 returned error can't find the container with id f27caadd601ef9817fae71f322e1ded4ed55713d92263620c1b64698eee4dcae Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.127522 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08355962-f2c7-470e-982b-24594f675b64" (UID: "08355962-f2c7-470e-982b-24594f675b64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.194518 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08355962-f2c7-470e-982b-24594f675b64-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.324204 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzfrz" event={"ID":"4f5bdcd1-62a5-4fe9-8968-7979ce92bb72","Type":"ContainerDied","Data":"f9edd259290633bb55db7d2ed124d435a6deeb2ad5d94700a724a4f43dfc686f"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.324249 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wzfrz" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.324294 4856 scope.go:117] "RemoveContainer" containerID="24f2bf5820a7ac74ddd8f1c2091f12f92894e993cf277446363ad1f45eb37924" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.327463 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql25n" event={"ID":"7ca6b2f8-d259-4c32-9582-d18786e0762a","Type":"ContainerStarted","Data":"26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.327854 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ql25n" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="registry-server" containerID="cri-o://26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd" gracePeriod=30 Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.333125 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k2kk" event={"ID":"fb7d3a7f-5b39-41fd-a96b-55118b44cfa5","Type":"ContainerDied","Data":"9f304adf5f92c441083c274eeba06928f23375303838050a3bfd1084f617930a"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.333496 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k2kk" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.339859 4856 generic.go:334] "Generic (PLEG): container finished" podID="08355962-f2c7-470e-982b-24594f675b64" containerID="3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00" exitCode=0 Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.339944 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerDied","Data":"3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.339975 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hp6" event={"ID":"08355962-f2c7-470e-982b-24594f675b64","Type":"ContainerDied","Data":"4fd52f5069e5ca0ba1108c2ee5216d3fa877dff899041954198f6be361e41110"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.340066 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hp6" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.356442 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ql25n" podStartSLOduration=5.401269027 podStartE2EDuration="1m27.356407495s" podCreationTimestamp="2025-12-03 09:15:09 +0000 UTC" firstStartedPulling="2025-12-03 09:15:12.964029372 +0000 UTC m=+181.146921673" lastFinishedPulling="2025-12-03 09:16:34.91916784 +0000 UTC m=+263.102060141" observedRunningTime="2025-12-03 09:16:36.354828477 +0000 UTC m=+264.537720798" watchObservedRunningTime="2025-12-03 09:16:36.356407495 +0000 UTC m=+264.539299796" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.361614 4856 scope.go:117] "RemoveContainer" containerID="fe067f98d6e9e8111f3b28d74f4313bc47ffdf9da36ce6b693cdae01b8bdcd51" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.363437 4856 generic.go:334] "Generic (PLEG): container finished" podID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerID="821f67276fc7fa5e2b018980af98a83ea2b2fc7fa97bef413a059f1e463a719f" exitCode=0 Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.363578 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdzg8" event={"ID":"0ceab033-9d0f-4e3c-a6ad-1daaef67c864","Type":"ContainerDied","Data":"821f67276fc7fa5e2b018980af98a83ea2b2fc7fa97bef413a059f1e463a719f"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.389063 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.389884 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pksqg" event={"ID":"19d74018-0d57-4f93-a298-64b08e3df414","Type":"ContainerDied","Data":"04a98d7d5ee7e83f280e7eda7611da32680e7ec6218e6d1962bfbf975eb055fa"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.396975 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" event={"ID":"87add44f-39e7-460b-9f01-d5aa27e44491","Type":"ContainerStarted","Data":"0fd2c16f55d032e3f40bd477e3a7d7e66ae25685b0d9dae6eab6c2ac7fe9bad7"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.397036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" event={"ID":"87add44f-39e7-460b-9f01-d5aa27e44491","Type":"ContainerStarted","Data":"f27caadd601ef9817fae71f322e1ded4ed55713d92263620c1b64698eee4dcae"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.401463 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7nlmw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.400252 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.401824 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" podUID="87add44f-39e7-460b-9f01-d5aa27e44491" containerName="marketplace-operator" probeResult="failure" output="Get 
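\"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"

The probe failure above is the normal race right after ContainerStarted: the kubelet polls /healthz before the operator has bound :8080, gets connection refused, and simply retries (the pod flips to ready about a second later, below). A self-contained Go sketch of such an HTTP readiness check follows; the URL comes from the log, while the function and one-second timeout are illustrative assumptions, not the kubelet's prober implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeReadiness returns nil only for a 2xx answer; a refused connection
// (as logged above) is reported as a failure to be retried, not a crash.
func probeReadiness(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeReadiness("http://10.217.0.57:8080/healthz"); err != nil {
		fmt.Println(err) // the kubelet retries on its probe period
	}
}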
\"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.409935 4856 generic.go:334] "Generic (PLEG): container finished" podID="19e871a1-697e-4373-b40b-81cb18911db0" containerID="13b7096768c0cdb76034c2bed18abcd0d8417b7c202cee71b2092897803457ab" exitCode=0 Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.410042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzxqm" event={"ID":"19e871a1-697e-4373-b40b-81cb18911db0","Type":"ContainerDied","Data":"13b7096768c0cdb76034c2bed18abcd0d8417b7c202cee71b2092897803457ab"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.417596 4856 scope.go:117] "RemoveContainer" containerID="39b6e31982a1e88c77ae8c431ee86d8a59a718aff7dbf88d9040d71ba63bb9ef" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.422693 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6k2kk"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.423588 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxbl9" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="registry-server" containerID="cri-o://21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581" gracePeriod=30 Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.423956 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxbl9" event={"ID":"5c8458ae-48c5-41fa-95a1-a22d7b0c250c","Type":"ContainerStarted","Data":"21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.429067 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6k2kk"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.439499 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wzfrz"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.450376 4856 generic.go:334] "Generic (PLEG): container finished" podID="5fe6decf-3a3f-4150-a617-176207930add" containerID="94f7fa3fbac4d69ca39968e96ee02053ef84ad94412be7897405515f40ebba6e" exitCode=0 Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.450427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llz4m" event={"ID":"5fe6decf-3a3f-4150-a617-176207930add","Type":"ContainerDied","Data":"94f7fa3fbac4d69ca39968e96ee02053ef84ad94412be7897405515f40ebba6e"} Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.454232 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wzfrz"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.498012 4856 scope.go:117] "RemoveContainer" containerID="fdf2e1c6ee0a72c14823dd92b2976e426e8292ba2fa079a7a0801f4d8d4f008a" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.546984 4856 scope.go:117] "RemoveContainer" containerID="b61fd060e9ed012bd4c06a8303b25388479390080af2343aee31fa778c5c2ae8" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.561194 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" podStartSLOduration=1.5611476290000001 podStartE2EDuration="1.561147629s" podCreationTimestamp="2025-12-03 09:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:16:36.559159879 +0000 UTC m=+264.742052180" watchObservedRunningTime="2025-12-03 09:16:36.561147629 +0000 UTC m=+264.744039930" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.598730 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxbl9" podStartSLOduration=5.580801414 podStartE2EDuration="1m27.598703973s" podCreationTimestamp="2025-12-03 09:15:09 +0000 UTC" firstStartedPulling="2025-12-03 09:15:12.9163195 +0000 UTC m=+181.099211801" lastFinishedPulling="2025-12-03 09:16:34.934222059 +0000 UTC m=+263.117114360" observedRunningTime="2025-12-03 09:16:36.598019982 +0000 UTC m=+264.780912283" watchObservedRunningTime="2025-12-03 09:16:36.598703973 +0000 UTC m=+264.781596274" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.618655 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2hp6"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.624617 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2hp6"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.630534 4856 scope.go:117] "RemoveContainer" containerID="cdee22dfe87eac92cf59cc4aa061243f2dc116dbfcf4d782c079573c7efe5663" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.647721 4856 scope.go:117] "RemoveContainer" containerID="3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.658444 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pksqg"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.670421 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pksqg"] Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.694321 4856 scope.go:117] "RemoveContainer" containerID="551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.698282 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08355962-f2c7-470e-982b-24594f675b64" path="/var/lib/kubelet/pods/08355962-f2c7-470e-982b-24594f675b64/volumes" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.698910 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d74018-0d57-4f93-a298-64b08e3df414" path="/var/lib/kubelet/pods/19d74018-0d57-4f93-a298-64b08e3df414/volumes" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.699341 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" path="/var/lib/kubelet/pods/4f5bdcd1-62a5-4fe9-8968-7979ce92bb72/volumes" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.700337 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" path="/var/lib/kubelet/pods/fb7d3a7f-5b39-41fd-a96b-55118b44cfa5/volumes" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.739322 4856 scope.go:117] "RemoveContainer" containerID="77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.776400 4856 scope.go:117] "RemoveContainer" containerID="3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00" Dec 03 09:16:36 crc kubenswrapper[4856]: E1203 09:16:36.777523 4856 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00\": container with ID starting with 3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00 not found: ID does not exist" containerID="3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.777555 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00"} err="failed to get container status \"3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00\": rpc error: code = NotFound desc = could not find container \"3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00\": container with ID starting with 3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00 not found: ID does not exist" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.777580 4856 scope.go:117] "RemoveContainer" containerID="551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc" Dec 03 09:16:36 crc kubenswrapper[4856]: E1203 09:16:36.777883 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc\": container with ID starting with 551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc not found: ID does not exist" containerID="551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.777905 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc"} err="failed to get container status \"551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc\": rpc error: code = NotFound desc = could not find container \"551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc\": container with ID starting with 551197b37864cb92249b2fbeee087154fd196baea5b58ab2f8698850d75adfdc not found: ID does not exist" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.777919 4856 scope.go:117] "RemoveContainer" containerID="77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0" Dec 03 09:16:36 crc kubenswrapper[4856]: E1203 09:16:36.778159 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0\": container with ID starting with 77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0 not found: ID does not exist" containerID="77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.778180 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0"} err="failed to get container status \"77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0\": rpc error: code = NotFound desc = could not find container \"77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0\": container with ID starting with 77c1bdcf143b1ffd261daa9f30db2137e2180405ed59d1d00e82a26a1e3166c0 not found: ID does not exist" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.778196 4856 scope.go:117] "RemoveContainer" 
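containerID="46866ffa3c281b813bdf5adb21e5b265ac2147d2d8671e71126e122081fa86bf"

The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above are benign: the kubelet asks CRI-O about a container an earlier pass already removed, receives gRPC NotFound, and treats the goal state as reached. A small Go sketch of that idempotent-delete pattern follows; the remove callback is a stand-in for the CRI client, and only the status-code handling is the point.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer swallows NotFound: if the runtime has already pruned
// the container, deletion has effectively succeeded.
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone, nothing to do\n", id)
		return nil
	}
	return err // nil on success, or a real failure worth surfacing
}

func main() {
	// Stand-in runtime that reports the container as missing, like CRI-O above.
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer(notFound, "3c37127ddc77ba04163fd029ae9da5e22ec7825320fd62c5ea80b793e805be00"); err != nil {
		panic(err)
	}
}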
containerID="46866ffa3c281b813bdf5adb21e5b265ac2147d2d8671e71126e122081fa86bf" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.779556 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.902974 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mhw4\" (UniqueName: \"kubernetes.io/projected/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-kube-api-access-5mhw4\") pod \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.903130 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-catalog-content\") pod \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.903190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-utilities\") pod \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\" (UID: \"0ceab033-9d0f-4e3c-a6ad-1daaef67c864\") " Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.907743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-utilities" (OuterVolumeSpecName: "utilities") pod "0ceab033-9d0f-4e3c-a6ad-1daaef67c864" (UID: "0ceab033-9d0f-4e3c-a6ad-1daaef67c864"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.911124 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-kube-api-access-5mhw4" (OuterVolumeSpecName: "kube-api-access-5mhw4") pod "0ceab033-9d0f-4e3c-a6ad-1daaef67c864" (UID: "0ceab033-9d0f-4e3c-a6ad-1daaef67c864"). InnerVolumeSpecName "kube-api-access-5mhw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.933071 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ceab033-9d0f-4e3c-a6ad-1daaef67c864" (UID: "0ceab033-9d0f-4e3c-a6ad-1daaef67c864"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:36 crc kubenswrapper[4856]: I1203 09:16:36.969952 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.005064 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlmjr\" (UniqueName: \"kubernetes.io/projected/19e871a1-697e-4373-b40b-81cb18911db0-kube-api-access-jlmjr\") pod \"19e871a1-697e-4373-b40b-81cb18911db0\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.005233 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-utilities\") pod \"19e871a1-697e-4373-b40b-81cb18911db0\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.005317 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-catalog-content\") pod \"19e871a1-697e-4373-b40b-81cb18911db0\" (UID: \"19e871a1-697e-4373-b40b-81cb18911db0\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.005634 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.005656 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mhw4\" (UniqueName: \"kubernetes.io/projected/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-kube-api-access-5mhw4\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.005670 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ceab033-9d0f-4e3c-a6ad-1daaef67c864-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.006523 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-utilities" (OuterVolumeSpecName: "utilities") pod "19e871a1-697e-4373-b40b-81cb18911db0" (UID: "19e871a1-697e-4373-b40b-81cb18911db0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.011134 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e871a1-697e-4373-b40b-81cb18911db0-kube-api-access-jlmjr" (OuterVolumeSpecName: "kube-api-access-jlmjr") pod "19e871a1-697e-4373-b40b-81cb18911db0" (UID: "19e871a1-697e-4373-b40b-81cb18911db0"). InnerVolumeSpecName "kube-api-access-jlmjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.062913 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.072058 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ql25n_7ca6b2f8-d259-4c32-9582-d18786e0762a/registry-server/0.log" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.073285 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.081374 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zxbl9_5c8458ae-48c5-41fa-95a1-a22d7b0c250c/registry-server/0.log" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.084767 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxbl9" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.108986 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-utilities\") pod \"5fe6decf-3a3f-4150-a617-176207930add\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.109174 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td2mn\" (UniqueName: \"kubernetes.io/projected/5fe6decf-3a3f-4150-a617-176207930add-kube-api-access-td2mn\") pod \"5fe6decf-3a3f-4150-a617-176207930add\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.109228 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2fk2\" (UniqueName: \"kubernetes.io/projected/7ca6b2f8-d259-4c32-9582-d18786e0762a-kube-api-access-c2fk2\") pod \"7ca6b2f8-d259-4c32-9582-d18786e0762a\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.109263 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-catalog-content\") pod \"5fe6decf-3a3f-4150-a617-176207930add\" (UID: \"5fe6decf-3a3f-4150-a617-176207930add\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.109299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-utilities\") pod \"7ca6b2f8-d259-4c32-9582-d18786e0762a\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.109367 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-catalog-content\") pod \"7ca6b2f8-d259-4c32-9582-d18786e0762a\" (UID: \"7ca6b2f8-d259-4c32-9582-d18786e0762a\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.110145 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-utilities" (OuterVolumeSpecName: "utilities") pod "5fe6decf-3a3f-4150-a617-176207930add" (UID: "5fe6decf-3a3f-4150-a617-176207930add"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.112751 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca6b2f8-d259-4c32-9582-d18786e0762a-kube-api-access-c2fk2" (OuterVolumeSpecName: "kube-api-access-c2fk2") pod "7ca6b2f8-d259-4c32-9582-d18786e0762a" (UID: "7ca6b2f8-d259-4c32-9582-d18786e0762a"). InnerVolumeSpecName "kube-api-access-c2fk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.113428 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-utilities" (OuterVolumeSpecName: "utilities") pod "7ca6b2f8-d259-4c32-9582-d18786e0762a" (UID: "7ca6b2f8-d259-4c32-9582-d18786e0762a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.123225 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe6decf-3a3f-4150-a617-176207930add-kube-api-access-td2mn" (OuterVolumeSpecName: "kube-api-access-td2mn") pod "5fe6decf-3a3f-4150-a617-176207930add" (UID: "5fe6decf-3a3f-4150-a617-176207930add"). InnerVolumeSpecName "kube-api-access-td2mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.141866 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.141904 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlmjr\" (UniqueName: \"kubernetes.io/projected/19e871a1-697e-4373-b40b-81cb18911db0-kube-api-access-jlmjr\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.141918 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td2mn\" (UniqueName: \"kubernetes.io/projected/5fe6decf-3a3f-4150-a617-176207930add-kube-api-access-td2mn\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.141931 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2fk2\" (UniqueName: \"kubernetes.io/projected/7ca6b2f8-d259-4c32-9582-d18786e0762a-kube-api-access-c2fk2\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.141941 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.141952 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.148820 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19e871a1-697e-4373-b40b-81cb18911db0" (UID: "19e871a1-697e-4373-b40b-81cb18911db0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.162083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fe6decf-3a3f-4150-a617-176207930add" (UID: "5fe6decf-3a3f-4150-a617-176207930add"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.186490 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ca6b2f8-d259-4c32-9582-d18786e0762a" (UID: "7ca6b2f8-d259-4c32-9582-d18786e0762a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.242447 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-catalog-content\") pod \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.242533 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5vtg\" (UniqueName: \"kubernetes.io/projected/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-kube-api-access-s5vtg\") pod \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.242575 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-utilities\") pod \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\" (UID: \"5c8458ae-48c5-41fa-95a1-a22d7b0c250c\") " Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.242832 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ca6b2f8-d259-4c32-9582-d18786e0762a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.242845 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe6decf-3a3f-4150-a617-176207930add-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.242854 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e871a1-697e-4373-b40b-81cb18911db0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.243742 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-utilities" (OuterVolumeSpecName: "utilities") pod "5c8458ae-48c5-41fa-95a1-a22d7b0c250c" (UID: "5c8458ae-48c5-41fa-95a1-a22d7b0c250c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.247980 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-kube-api-access-s5vtg" (OuterVolumeSpecName: "kube-api-access-s5vtg") pod "5c8458ae-48c5-41fa-95a1-a22d7b0c250c" (UID: "5c8458ae-48c5-41fa-95a1-a22d7b0c250c"). InnerVolumeSpecName "kube-api-access-s5vtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.293129 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c8458ae-48c5-41fa-95a1-a22d7b0c250c" (UID: "5c8458ae-48c5-41fa-95a1-a22d7b0c250c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.343686 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5vtg\" (UniqueName: \"kubernetes.io/projected/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-kube-api-access-s5vtg\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.343721 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.343744 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c8458ae-48c5-41fa-95a1-a22d7b0c250c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.473581 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zxbl9_5c8458ae-48c5-41fa-95a1-a22d7b0c250c/registry-server/0.log" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.474492 4856 generic.go:334] "Generic (PLEG): container finished" podID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerID="21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581" exitCode=1 Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.474542 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxbl9" event={"ID":"5c8458ae-48c5-41fa-95a1-a22d7b0c250c","Type":"ContainerDied","Data":"21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.474610 4856 util.go:48] "No ready sandbox for pod can be found. 
Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.474633 4856 scope.go:117] "RemoveContainer" containerID="21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.474617 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxbl9" event={"ID":"5c8458ae-48c5-41fa-95a1-a22d7b0c250c","Type":"ContainerDied","Data":"df6f0b4962026707e7f9bfaf6bdbda2a1096cee745909019d0c6acc7b79bd4a7"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.476399 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ql25n_7ca6b2f8-d259-4c32-9582-d18786e0762a/registry-server/0.log" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.477497 4856 generic.go:334] "Generic (PLEG): container finished" podID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerID="26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd" exitCode=1 Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.477622 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql25n" event={"ID":"7ca6b2f8-d259-4c32-9582-d18786e0762a","Type":"ContainerDied","Data":"26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.477703 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ql25n" event={"ID":"7ca6b2f8-d259-4c32-9582-d18786e0762a","Type":"ContainerDied","Data":"10388f39444d771f74c1974d77b3d21e554af78546d665352a0cf7a478da957b"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.477665 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ql25n" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.483546 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llz4m" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.483555 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llz4m" event={"ID":"5fe6decf-3a3f-4150-a617-176207930add","Type":"ContainerDied","Data":"175b904117ccc25ee0d986c7dfa9dddb27d885a68c9d2a787819d21ef77c41cc"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.488957 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzxqm" event={"ID":"19e871a1-697e-4373-b40b-81cb18911db0","Type":"ContainerDied","Data":"5fc9e2aebadbcb1c7ab4a191a4494ef3c1d557fe40636922b313cca50107b2ca"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.489096 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzxqm" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.494598 4856 scope.go:117] "RemoveContainer" containerID="9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.497159 4856 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdzg8" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.497224 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdzg8" event={"ID":"0ceab033-9d0f-4e3c-a6ad-1daaef67c864","Type":"ContainerDied","Data":"f2f0deb5b2ecc082d08f73547bfb1994fa4ecbdb79391ff2d0c20391c2178d6e"} Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.500036 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7nlmw" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.524928 4856 scope.go:117] "RemoveContainer" containerID="efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.564337 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxbl9"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.566847 4856 scope.go:117] "RemoveContainer" containerID="21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581" Dec 03 09:16:37 crc kubenswrapper[4856]: E1203 09:16:37.570664 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581\": container with ID starting with 21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581 not found: ID does not exist" containerID="21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.570753 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581"} err="failed to get container status \"21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581\": rpc error: code = NotFound desc = could not find container \"21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581\": container with ID starting with 21c0982c8f31b3f21b440fa721122eabcac943044f6cbc40a8916285ad5b7581 not found: ID does not exist" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.571002 4856 scope.go:117] "RemoveContainer" containerID="9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d" Dec 03 09:16:37 crc kubenswrapper[4856]: E1203 09:16:37.571427 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d\": container with ID starting with 9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d not found: ID does not exist" containerID="9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.571947 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d"} err="failed to get container status \"9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d\": rpc error: code = NotFound desc = could not find container \"9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d\": container with ID starting with 9ac4c0de1ae79b9dc36fa5b5c92b944553a64f8d0b27fa1b5986b60e026a311d not found: ID does not exist" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.572496 4856 scope.go:117] "RemoveContainer" 
containerID="efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.573973 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxbl9"] Dec 03 09:16:37 crc kubenswrapper[4856]: E1203 09:16:37.574716 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34\": container with ID starting with efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34 not found: ID does not exist" containerID="efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.574754 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34"} err="failed to get container status \"efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34\": rpc error: code = NotFound desc = could not find container \"efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34\": container with ID starting with efcd02cbaba0cfe87633b3d50fbf7a89a3663542b7fc16e886f43fd976abcc34 not found: ID does not exist" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.574777 4856 scope.go:117] "RemoveContainer" containerID="26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.577841 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ql25n"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.579986 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ql25n"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.602285 4856 scope.go:117] "RemoveContainer" containerID="68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.610688 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdzg8"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.617279 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdzg8"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.618634 4856 scope.go:117] "RemoveContainer" containerID="7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.635947 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-llz4m"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.639650 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-llz4m"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.642926 4856 scope.go:117] "RemoveContainer" containerID="26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd" Dec 03 09:16:37 crc kubenswrapper[4856]: E1203 09:16:37.644131 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd\": container with ID starting with 26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd not found: ID does not exist" containerID="26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd" Dec 03 09:16:37 crc 
kubenswrapper[4856]: I1203 09:16:37.644323 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd"} err="failed to get container status \"26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd\": rpc error: code = NotFound desc = could not find container \"26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd\": container with ID starting with 26c41c2df63106d1b3d4d6b45d9ff75dd82fe8da439a8681d6abddfd1d8983bd not found: ID does not exist" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.644393 4856 scope.go:117] "RemoveContainer" containerID="68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5" Dec 03 09:16:37 crc kubenswrapper[4856]: E1203 09:16:37.645254 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5\": container with ID starting with 68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5 not found: ID does not exist" containerID="68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.645319 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5"} err="failed to get container status \"68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5\": rpc error: code = NotFound desc = could not find container \"68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5\": container with ID starting with 68685931e13674bd3314a96bdd6baf8e7a592b89cb2f9d2eb4b799b9d71dd7e5 not found: ID does not exist" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.645362 4856 scope.go:117] "RemoveContainer" containerID="7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d" Dec 03 09:16:37 crc kubenswrapper[4856]: E1203 09:16:37.646410 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d\": container with ID starting with 7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d not found: ID does not exist" containerID="7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.646520 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d"} err="failed to get container status \"7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d\": rpc error: code = NotFound desc = could not find container \"7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d\": container with ID starting with 7ec5f51dc02d4038a37da56e0ade08619a5e9c945ad759e828dba6d549c6909d not found: ID does not exist" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.646615 4856 scope.go:117] "RemoveContainer" containerID="94f7fa3fbac4d69ca39968e96ee02053ef84ad94412be7897405515f40ebba6e" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.666082 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzxqm"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.669797 4856 scope.go:117] "RemoveContainer" 
containerID="fa5bc87f05343ddc8d46a44914557b281c1586faa76d65c5fe12e2e1cb1ff64e" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.670288 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lzxqm"] Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.685837 4856 scope.go:117] "RemoveContainer" containerID="13b7096768c0cdb76034c2bed18abcd0d8417b7c202cee71b2092897803457ab" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.708632 4856 scope.go:117] "RemoveContainer" containerID="bd68b1d849187308b3962cd6b12b8b831b4b7c532de30c751923a09d9b102e5c" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.730822 4856 scope.go:117] "RemoveContainer" containerID="821f67276fc7fa5e2b018980af98a83ea2b2fc7fa97bef413a059f1e463a719f" Dec 03 09:16:37 crc kubenswrapper[4856]: I1203 09:16:37.747270 4856 scope.go:117] "RemoveContainer" containerID="871066e55e0996bf52c5e70d612918d64e0ec50b99a0797f3af1e682e6c1408b" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.267541 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dlb"] Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268340 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e871a1-697e-4373-b40b-81cb18911db0" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268357 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e871a1-697e-4373-b40b-81cb18911db0" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268373 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268380 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268392 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe6decf-3a3f-4150-a617-176207930add" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268399 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe6decf-3a3f-4150-a617-176207930add" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268409 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268416 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268425 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268432 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268442 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268449 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerName="extract-content" Dec 03 09:16:38 crc 
kubenswrapper[4856]: E1203 09:16:38.268458 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268467 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268477 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268483 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268494 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268500 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268511 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268518 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268530 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268573 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268581 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268588 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268598 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268605 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268613 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268619 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268630 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268637 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="08355962-f2c7-470e-982b-24594f675b64" 
containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268647 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268655 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268664 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268672 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268681 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe6decf-3a3f-4150-a617-176207930add" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268689 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe6decf-3a3f-4150-a617-176207930add" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268696 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268703 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268713 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e871a1-697e-4373-b40b-81cb18911db0" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268721 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e871a1-697e-4373-b40b-81cb18911db0" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268730 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268736 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: E1203 09:16:38.268746 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268753 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="extract-utilities" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268889 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268901 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe6decf-3a3f-4150-a617-176207930add" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268913 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268922 4856 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4f5bdcd1-62a5-4fe9-8968-7979ce92bb72" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268932 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7d3a7f-5b39-41fd-a96b-55118b44cfa5" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268941 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="08355962-f2c7-470e-982b-24594f675b64" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268953 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" containerName="registry-server" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268962 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e871a1-697e-4373-b40b-81cb18911db0" containerName="extract-content" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.268974 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d74018-0d57-4f93-a298-64b08e3df414" containerName="marketplace-operator" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.269973 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.273278 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.308164 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dlb"] Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.359037 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d5cdf16-9723-4454-9d50-01be3e7d70cc-catalog-content\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.359677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d5cdf16-9723-4454-9d50-01be3e7d70cc-utilities\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.359887 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrgl\" (UniqueName: \"kubernetes.io/projected/0d5cdf16-9723-4454-9d50-01be3e7d70cc-kube-api-access-nxrgl\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.461189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d5cdf16-9723-4454-9d50-01be3e7d70cc-utilities\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.461285 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrgl\" (UniqueName: 
\"kubernetes.io/projected/0d5cdf16-9723-4454-9d50-01be3e7d70cc-kube-api-access-nxrgl\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.461380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d5cdf16-9723-4454-9d50-01be3e7d70cc-catalog-content\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.461766 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d5cdf16-9723-4454-9d50-01be3e7d70cc-utilities\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.461869 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d5cdf16-9723-4454-9d50-01be3e7d70cc-catalog-content\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.464352 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvlfn"] Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.465591 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.468480 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.479387 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvlfn"] Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.481503 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrgl\" (UniqueName: \"kubernetes.io/projected/0d5cdf16-9723-4454-9d50-01be3e7d70cc-kube-api-access-nxrgl\") pod \"redhat-marketplace-c2dlb\" (UID: \"0d5cdf16-9723-4454-9d50-01be3e7d70cc\") " pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.562380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a69196-dd0b-4747-b165-b72dcdfa48e4-catalog-content\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.562449 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a69196-dd0b-4747-b165-b72dcdfa48e4-utilities\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.562473 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lq6\" (UniqueName: 
\"kubernetes.io/projected/88a69196-dd0b-4747-b165-b72dcdfa48e4-kube-api-access-68lq6\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.617524 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.663884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68lq6\" (UniqueName: \"kubernetes.io/projected/88a69196-dd0b-4747-b165-b72dcdfa48e4-kube-api-access-68lq6\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.664029 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a69196-dd0b-4747-b165-b72dcdfa48e4-catalog-content\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.664087 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a69196-dd0b-4747-b165-b72dcdfa48e4-utilities\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.664760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88a69196-dd0b-4747-b165-b72dcdfa48e4-utilities\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.664895 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88a69196-dd0b-4747-b165-b72dcdfa48e4-catalog-content\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.687594 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lq6\" (UniqueName: \"kubernetes.io/projected/88a69196-dd0b-4747-b165-b72dcdfa48e4-kube-api-access-68lq6\") pod \"redhat-operators-rvlfn\" (UID: \"88a69196-dd0b-4747-b165-b72dcdfa48e4\") " pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.697418 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ceab033-9d0f-4e3c-a6ad-1daaef67c864" path="/var/lib/kubelet/pods/0ceab033-9d0f-4e3c-a6ad-1daaef67c864/volumes" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.698078 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e871a1-697e-4373-b40b-81cb18911db0" path="/var/lib/kubelet/pods/19e871a1-697e-4373-b40b-81cb18911db0/volumes" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.700013 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c8458ae-48c5-41fa-95a1-a22d7b0c250c" path="/var/lib/kubelet/pods/5c8458ae-48c5-41fa-95a1-a22d7b0c250c/volumes" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.700633 4856 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe6decf-3a3f-4150-a617-176207930add" path="/var/lib/kubelet/pods/5fe6decf-3a3f-4150-a617-176207930add/volumes" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.701201 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca6b2f8-d259-4c32-9582-d18786e0762a" path="/var/lib/kubelet/pods/7ca6b2f8-d259-4c32-9582-d18786e0762a/volumes" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.782905 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:38 crc kubenswrapper[4856]: I1203 09:16:38.839560 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dlb"] Dec 03 09:16:38 crc kubenswrapper[4856]: W1203 09:16:38.844719 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d5cdf16_9723_4454_9d50_01be3e7d70cc.slice/crio-dd661f72243108a505eeb06985158f8712843b05ece0c978b703728aab561cbe WatchSource:0}: Error finding container dd661f72243108a505eeb06985158f8712843b05ece0c978b703728aab561cbe: Status 404 returned error can't find the container with id dd661f72243108a505eeb06985158f8712843b05ece0c978b703728aab561cbe Dec 03 09:16:39 crc kubenswrapper[4856]: I1203 09:16:39.200370 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvlfn"] Dec 03 09:16:39 crc kubenswrapper[4856]: W1203 09:16:39.207022 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88a69196_dd0b_4747_b165_b72dcdfa48e4.slice/crio-5b7f40337bcbcbccb4b976931c1d7fde630a6a7cbc02b321825b6ff697224bfa WatchSource:0}: Error finding container 5b7f40337bcbcbccb4b976931c1d7fde630a6a7cbc02b321825b6ff697224bfa: Status 404 returned error can't find the container with id 5b7f40337bcbcbccb4b976931c1d7fde630a6a7cbc02b321825b6ff697224bfa Dec 03 09:16:39 crc kubenswrapper[4856]: I1203 09:16:39.516081 4856 generic.go:334] "Generic (PLEG): container finished" podID="88a69196-dd0b-4747-b165-b72dcdfa48e4" containerID="4fb39f1665ae07a64e1a734e97bb4410057fbc9948c617d7bbbe14dc70eab50e" exitCode=0 Dec 03 09:16:39 crc kubenswrapper[4856]: I1203 09:16:39.516186 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvlfn" event={"ID":"88a69196-dd0b-4747-b165-b72dcdfa48e4","Type":"ContainerDied","Data":"4fb39f1665ae07a64e1a734e97bb4410057fbc9948c617d7bbbe14dc70eab50e"} Dec 03 09:16:39 crc kubenswrapper[4856]: I1203 09:16:39.516252 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvlfn" event={"ID":"88a69196-dd0b-4747-b165-b72dcdfa48e4","Type":"ContainerStarted","Data":"5b7f40337bcbcbccb4b976931c1d7fde630a6a7cbc02b321825b6ff697224bfa"} Dec 03 09:16:39 crc kubenswrapper[4856]: I1203 09:16:39.518863 4856 generic.go:334] "Generic (PLEG): container finished" podID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" containerID="bde02ce1d7fa1988d8b7d24e24def14094ed812d7ed1b28ccd1909c2a31e1939" exitCode=0 Dec 03 09:16:39 crc kubenswrapper[4856]: I1203 09:16:39.518977 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dlb" event={"ID":"0d5cdf16-9723-4454-9d50-01be3e7d70cc","Type":"ContainerDied","Data":"bde02ce1d7fa1988d8b7d24e24def14094ed812d7ed1b28ccd1909c2a31e1939"} Dec 03 09:16:39 crc kubenswrapper[4856]: 
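
Two recurring patterns around this point are worth flagging. The "Failed to process watch event ... Status 404" warnings come from the cgroup watcher noticing a freshly created crio-* cgroup before the runtime has finished registering the container; they are transient. And in the PLEG events that follow, "ContainerDied" with exitCode=0 for the extract-utilities and extract-content containers is the normal completion of a catalog pod's init steps, not a failure. A toy event type whose field names mirror the log text (the type itself is illustrative):

```go
// Toy model of the PLEG events below. The key reading aid:
// ContainerDied with exitCode=0 is a clean completion, not a crash.
package main

import "fmt"

type plegEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted" or "ContainerDied"
	Data string // container (or sandbox) ID
}

func describe(e plegEvent, exitCode int) string {
	if e.Type == "ContainerDied" && exitCode == 0 {
		return fmt.Sprintf("pod %s: container %s completed cleanly", e.ID, e.Data[:12])
	}
	return fmt.Sprintf("pod %s: %s %s", e.ID, e.Type, e.Data[:12])
}

func main() {
	// Values copied from the redhat-operators-rvlfn events nearby.
	fmt.Println(describe(plegEvent{
		ID:   "88a69196-dd0b-4747-b165-b72dcdfa48e4",
		Type: "ContainerDied",
		Data: "4fb39f1665ae07a64e1a734e97bb4410057fbc9948c617d7bbbe14dc70eab50e",
	}, 0))
}
```
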
I1203 09:16:39.519004 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dlb" event={"ID":"0d5cdf16-9723-4454-9d50-01be3e7d70cc","Type":"ContainerStarted","Data":"dd661f72243108a505eeb06985158f8712843b05ece0c978b703728aab561cbe"} Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.530627 4856 generic.go:334] "Generic (PLEG): container finished" podID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" containerID="f644abd3fa3bc8fdcd8e30122bb087d6c3bdbeead3bb9cf83822d434f69a4ec0" exitCode=0 Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.530726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dlb" event={"ID":"0d5cdf16-9723-4454-9d50-01be3e7d70cc","Type":"ContainerDied","Data":"f644abd3fa3bc8fdcd8e30122bb087d6c3bdbeead3bb9cf83822d434f69a4ec0"} Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.542987 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvlfn" event={"ID":"88a69196-dd0b-4747-b165-b72dcdfa48e4","Type":"ContainerStarted","Data":"20fa70205fb10c32e08a69195ba3f8d9447ef8dbef65b2f71a27d0238cf201b1"} Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.666053 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rk78k"] Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.667696 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.670297 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.679996 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rk78k"] Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.790522 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a38bf6-9f1d-45db-a007-c04bae553534-utilities\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.790990 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a38bf6-9f1d-45db-a007-c04bae553534-catalog-content\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.791028 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnxkj\" (UniqueName: \"kubernetes.io/projected/65a38bf6-9f1d-45db-a007-c04bae553534-kube-api-access-fnxkj\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.868151 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hdmg"] Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.869222 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.871448 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.881521 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdmg"] Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.891876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a38bf6-9f1d-45db-a007-c04bae553534-utilities\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.891951 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a38bf6-9f1d-45db-a007-c04bae553534-catalog-content\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.891977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnxkj\" (UniqueName: \"kubernetes.io/projected/65a38bf6-9f1d-45db-a007-c04bae553534-kube-api-access-fnxkj\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.892650 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a38bf6-9f1d-45db-a007-c04bae553534-utilities\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.892885 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a38bf6-9f1d-45db-a007-c04bae553534-catalog-content\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.919665 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnxkj\" (UniqueName: \"kubernetes.io/projected/65a38bf6-9f1d-45db-a007-c04bae553534-kube-api-access-fnxkj\") pod \"community-operators-rk78k\" (UID: \"65a38bf6-9f1d-45db-a007-c04bae553534\") " pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.986576 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.993699 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phfkn\" (UniqueName: \"kubernetes.io/projected/9c211f61-54da-4cdd-b183-dcef0330433c-kube-api-access-phfkn\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.993754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-catalog-content\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:40 crc kubenswrapper[4856]: I1203 09:16:40.993782 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-utilities\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.095405 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phfkn\" (UniqueName: \"kubernetes.io/projected/9c211f61-54da-4cdd-b183-dcef0330433c-kube-api-access-phfkn\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.096711 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-catalog-content\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.096782 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-utilities\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.097644 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-catalog-content\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.097711 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-utilities\") pod \"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.124548 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phfkn\" (UniqueName: \"kubernetes.io/projected/9c211f61-54da-4cdd-b183-dcef0330433c-kube-api-access-phfkn\") pod 
\"certified-operators-6hdmg\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.201324 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rk78k"] Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.204635 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.419919 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdmg"] Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.553875 4856 generic.go:334] "Generic (PLEG): container finished" podID="88a69196-dd0b-4747-b165-b72dcdfa48e4" containerID="20fa70205fb10c32e08a69195ba3f8d9447ef8dbef65b2f71a27d0238cf201b1" exitCode=0 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.554058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvlfn" event={"ID":"88a69196-dd0b-4747-b165-b72dcdfa48e4","Type":"ContainerDied","Data":"20fa70205fb10c32e08a69195ba3f8d9447ef8dbef65b2f71a27d0238cf201b1"} Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.556075 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerStarted","Data":"5f63918a4c78de7c5c36040544104b4d101216965a28c9edd41f07c34404ca29"} Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.560968 4856 generic.go:334] "Generic (PLEG): container finished" podID="65a38bf6-9f1d-45db-a007-c04bae553534" containerID="a037314d9ea87243eade84dee3874b9d5e759c0d98c7e177cd7dd787b0b52e37" exitCode=0 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.561559 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk78k" event={"ID":"65a38bf6-9f1d-45db-a007-c04bae553534","Type":"ContainerDied","Data":"a037314d9ea87243eade84dee3874b9d5e759c0d98c7e177cd7dd787b0b52e37"} Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.561722 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk78k" event={"ID":"65a38bf6-9f1d-45db-a007-c04bae553534","Type":"ContainerStarted","Data":"bdb53b31e8641d3208877d4ff601d00549652bd02b78a7d86881bcd2b3858ec0"} Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.564577 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dlb" event={"ID":"0d5cdf16-9723-4454-9d50-01be3e7d70cc","Type":"ContainerStarted","Data":"9510b73859ebdfc875d6ca64c8e6d50945f93241007098b61ad2cb71d11bdfe1"} Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.595763 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2dlb" podStartSLOduration=2.165450651 podStartE2EDuration="3.595743563s" podCreationTimestamp="2025-12-03 09:16:38 +0000 UTC" firstStartedPulling="2025-12-03 09:16:39.520562093 +0000 UTC m=+267.703454394" lastFinishedPulling="2025-12-03 09:16:40.950855005 +0000 UTC m=+269.133747306" observedRunningTime="2025-12-03 09:16:41.592450502 +0000 UTC m=+269.775342813" watchObservedRunningTime="2025-12-03 09:16:41.595743563 +0000 UTC m=+269.778635864" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.715761 4856 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.716727 4856 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.716977 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.717101 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5" gracePeriod=15 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.717139 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d" gracePeriod=15 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.717157 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551" gracePeriod=15 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.717113 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541" gracePeriod=15 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.717248 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34" gracePeriod=15 Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.720697 4856 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721015 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721033 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721045 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721053 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721064 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 09:16:41 crc 
kubenswrapper[4856]: I1203 09:16:41.721070 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721083 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721090 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721100 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721106 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721120 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721126 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721254 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721265 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721277 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721289 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721298 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 09:16:41 crc kubenswrapper[4856]: E1203 09:16:41.721395 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721403 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.721511 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.765978 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807301 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807441 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807467 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807488 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807522 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.807542 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908419 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908518 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908554 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908579 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908616 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908637 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908789 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908842 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908884 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908922 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908941 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:41 crc kubenswrapper[4856]: I1203 09:16:41.908963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.065254 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:16:42 crc kubenswrapper[4856]: W1203 09:16:42.098437 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ee141c5710ece41194134b269bf588bfd501038223fa21fa64ff78066579288b WatchSource:0}: Error finding container ee141c5710ece41194134b269bf588bfd501038223fa21fa64ff78066579288b: Status 404 returned error can't find the container with id ee141c5710ece41194134b269bf588bfd501038223fa21fa64ff78066579288b Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.574101 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.576269 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.576992 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551" exitCode=0 Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.577021 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d" exitCode=0 Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.577032 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541" exitCode=0 Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.577044 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34" exitCode=2 Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.577078 4856 scope.go:117] "RemoveContainer" containerID="8e4358aaa851eea694fed4f446a8db2dbbec2843dd8f8a3270d2aa30ebcc3207" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.579289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk78k" event={"ID":"65a38bf6-9f1d-45db-a007-c04bae553534","Type":"ContainerStarted","Data":"436bbf6e1c01dfe8945319ca966e9926b4d43be6d1d8b0a38bfeb679f362890a"} Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.580523 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3d17478acfbbf7b66b5cddb8d9bdf9c4c1339fa73c664752764b70df0f0a17fd"} Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.580572 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ee141c5710ece41194134b269bf588bfd501038223fa21fa64ff78066579288b"} Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.580634 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.580879 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.581098 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.581347 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.581614 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.581888 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.583111 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvlfn" event={"ID":"88a69196-dd0b-4747-b165-b72dcdfa48e4","Type":"ContainerStarted","Data":"a8854780ed79bcfcb52a1d41f233f6dd894ce28e9a974720f80988f0f4c48aeb"} Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.583705 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.584199 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.584774 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.584979 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c211f61-54da-4cdd-b183-dcef0330433c" containerID="9140198d35ca16081a53136b2d5e067824a2a5591510f2f2f8a55a5d3d2d8a0c" exitCode=0 Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.585051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerDied","Data":"9140198d35ca16081a53136b2d5e067824a2a5591510f2f2f8a55a5d3d2d8a0c"} Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.586569 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.587970 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.588559 4856 generic.go:334] "Generic (PLEG): container finished" podID="21e5cefc-7eaf-45bc-919c-854583e63aba" containerID="0e6f203099001c346947f3ca27a7895e6857f428717968b057f3a3c709030a55" exitCode=0 Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.588639 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21e5cefc-7eaf-45bc-919c-854583e63aba","Type":"ContainerDied","Data":"0e6f203099001c346947f3ca27a7895e6857f428717968b057f3a3c709030a55"} Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.589234 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.589601 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.590148 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.591499 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.591903 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.592203 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.592664 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.592964 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.593197 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.593439 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.691749 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.692335 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.692485 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.692636 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.692787 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:42 crc kubenswrapper[4856]: I1203 09:16:42.692970 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.596532 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerStarted","Data":"7cd2e50b446c01a0ab9a8b39b8d4cbc9132ad8f7f1ad9674559c279a9f47dcf1"} Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.597605 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.598121 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.598561 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.598783 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.599016 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.600006 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.602630 4856 generic.go:334] "Generic (PLEG): container finished" podID="65a38bf6-9f1d-45db-a007-c04bae553534" containerID="436bbf6e1c01dfe8945319ca966e9926b4d43be6d1d8b0a38bfeb679f362890a" exitCode=0 Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.603874 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk78k" event={"ID":"65a38bf6-9f1d-45db-a007-c04bae553534","Type":"ContainerDied","Data":"436bbf6e1c01dfe8945319ca966e9926b4d43be6d1d8b0a38bfeb679f362890a"} Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.604973 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.605216 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.605448 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.605653 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.605857 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.900554 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.901587 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.901762 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.902106 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.902396 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:43 crc kubenswrapper[4856]: I1203 09:16:43.903912 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.041778 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e5cefc-7eaf-45bc-919c-854583e63aba-kube-api-access\") pod \"21e5cefc-7eaf-45bc-919c-854583e63aba\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.042877 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-var-lock\") pod \"21e5cefc-7eaf-45bc-919c-854583e63aba\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.042955 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-var-lock" (OuterVolumeSpecName: "var-lock") pod "21e5cefc-7eaf-45bc-919c-854583e63aba" (UID: "21e5cefc-7eaf-45bc-919c-854583e63aba"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.042914 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-kubelet-dir\") pod \"21e5cefc-7eaf-45bc-919c-854583e63aba\" (UID: \"21e5cefc-7eaf-45bc-919c-854583e63aba\") " Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.043137 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "21e5cefc-7eaf-45bc-919c-854583e63aba" (UID: "21e5cefc-7eaf-45bc-919c-854583e63aba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.043547 4856 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.043576 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21e5cefc-7eaf-45bc-919c-854583e63aba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.046969 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e5cefc-7eaf-45bc-919c-854583e63aba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "21e5cefc-7eaf-45bc-919c-854583e63aba" (UID: "21e5cefc-7eaf-45bc-919c-854583e63aba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.145288 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e5cefc-7eaf-45bc-919c-854583e63aba-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.611020 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c211f61-54da-4cdd-b183-dcef0330433c" containerID="7cd2e50b446c01a0ab9a8b39b8d4cbc9132ad8f7f1ad9674559c279a9f47dcf1" exitCode=0 Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.611135 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerDied","Data":"7cd2e50b446c01a0ab9a8b39b8d4cbc9132ad8f7f1ad9674559c279a9f47dcf1"} Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.613489 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.614245 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.614611 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.614621 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.614663 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21e5cefc-7eaf-45bc-919c-854583e63aba","Type":"ContainerDied","Data":"93f4e27443ff5d15b2c1ffe2f8896c12175b69a1e7e352b72f3f83b141349df1"} Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.615094 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f4e27443ff5d15b2c1ffe2f8896c12175b69a1e7e352b72f3f83b141349df1" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.615206 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.615514 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.619224 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.619798 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5" exitCode=0 Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.622671 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk78k" event={"ID":"65a38bf6-9f1d-45db-a007-c04bae553534","Type":"ContainerStarted","Data":"3944ab9c8981ccc290e94a88cf6b2920943b889f618a1bed76a2169efaf34c03"} Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.623273 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.623518 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.623797 4856 status_manager.go:851] "Failed to get status 
for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.624004 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.624214 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.632753 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.633059 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.633500 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.633707 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.633891 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.747757 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.749032 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.750300 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.750689 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.750990 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.751335 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.751676 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.752130 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.858660 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.858743 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.858791 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.858877 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.858915 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.859153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.859158 4856 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.859210 4856 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:44 crc kubenswrapper[4856]: I1203 09:16:44.960250 4856 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:16:45 crc kubenswrapper[4856]: I1203 09:16:45.632160 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 09:16:45 crc kubenswrapper[4856]: I1203 09:16:45.633334 4856 scope.go:117] "RemoveContainer" containerID="95b3edc64f1b5b3a75ecd97e4de94a67f0d59d3904b552cb0339cec339bab551" Dec 03 09:16:45 crc kubenswrapper[4856]: I1203 09:16:45.633380 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.127288 4856 scope.go:117] "RemoveContainer" containerID="4f77af74e7e1ec53bba8606838dd56731829339c854a33522e040765a869ca9d" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.138704 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.139128 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.139416 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.139672 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.139892 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.140346 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.144421 4856 scope.go:117] "RemoveContainer" containerID="7c4d630630ba42fbe72bb2a5a28f28663add49cefdd629ccc3e57b2cda524541" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.181867 4856 scope.go:117] "RemoveContainer" containerID="31a254d6739f78abd780905330eb3a2ca53f594e8b1cc6d29d0eb9559589cd34" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.199419 4856 scope.go:117] "RemoveContainer" containerID="eeee4bc4d49b8d3006882030f4ae928160d313f715a913ce8b1a2478617fa9d5" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.222854 4856 scope.go:117] "RemoveContainer" containerID="226bb7cecf10ac9c54b02dd5c2e95616869d915dcb142c06e5c1264e529bd32e" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.641206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" 
event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerStarted","Data":"cd5ca6b7499a8d90a15acb54c7c443caf39e216ffe20d2e8d42645c87e7ae3b9"} Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.642168 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.642472 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.642866 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.643080 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.643250 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.643437 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:46 crc kubenswrapper[4856]: I1203 09:16:46.696882 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 09:16:46 crc kubenswrapper[4856]: E1203 09:16:46.803583 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-6hdmg.187da9de6e905222 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6hdmg,UID:9c211f61-54da-4cdd-b183-dcef0330433c,APIVersion:v1,ResourceVersion:29516,FieldPath:spec.initContainers{extract-utilities},},Reason:Created,Message:Created container 
extract-utilities,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 09:16:41.80188829 +0000 UTC m=+269.984780591,LastTimestamp:2025-12-03 09:16:41.80188829 +0000 UTC m=+269.984780591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 09:16:48 crc kubenswrapper[4856]: E1203 09:16:48.131180 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-6hdmg.187da9de6e905222 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6hdmg,UID:9c211f61-54da-4cdd-b183-dcef0330433c,APIVersion:v1,ResourceVersion:29516,FieldPath:spec.initContainers{extract-utilities},},Reason:Created,Message:Created container extract-utilities,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 09:16:41.80188829 +0000 UTC m=+269.984780591,LastTimestamp:2025-12-03 09:16:41.80188829 +0000 UTC m=+269.984780591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.619606 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.619666 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.661292 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.662260 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.662664 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.663034 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.663342 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 
09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.663598 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.663847 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.703610 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2dlb" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.704399 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.705148 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.705649 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.706004 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.706310 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.706625 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: E1203 09:16:48.763215 4856 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC 
openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" volumeName="registry-storage" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.783561 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.783690 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.823441 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.824250 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.824744 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.825233 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.825556 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.825932 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:48 crc kubenswrapper[4856]: I1203 09:16:48.826510 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:49 crc kubenswrapper[4856]: I1203 09:16:49.699674 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvlfn" Dec 03 09:16:49 crc kubenswrapper[4856]: 
I1203 09:16:49.700400 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:49 crc kubenswrapper[4856]: I1203 09:16:49.700720 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:49 crc kubenswrapper[4856]: I1203 09:16:49.701177 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:49 crc kubenswrapper[4856]: I1203 09:16:49.701671 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:49 crc kubenswrapper[4856]: I1203 09:16:49.702061 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:49 crc kubenswrapper[4856]: I1203 09:16:49.702398 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:50 crc kubenswrapper[4856]: E1203 09:16:50.873824 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:50 crc kubenswrapper[4856]: E1203 09:16:50.874255 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:50 crc kubenswrapper[4856]: E1203 09:16:50.874528 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:50 crc kubenswrapper[4856]: E1203 09:16:50.874837 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:50 crc 
kubenswrapper[4856]: E1203 09:16:50.875144 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:50 crc kubenswrapper[4856]: I1203 09:16:50.875187 4856 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 09:16:50 crc kubenswrapper[4856]: E1203 09:16:50.875537 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="200ms" Dec 03 09:16:50 crc kubenswrapper[4856]: I1203 09:16:50.987574 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:50 crc kubenswrapper[4856]: I1203 09:16:50.987612 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.030276 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.031083 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.031334 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.031515 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.031654 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.031793 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.031960 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" 
pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: E1203 09:16:51.076819 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="400ms" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.205559 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.205910 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.246968 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.247768 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.248275 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.248971 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.249461 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.249886 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.250214 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: 
E1203 09:16:51.477458 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="800ms" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.712791 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.713395 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.714160 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.714566 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.714915 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.714984 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rk78k" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.715209 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.715553 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.716030 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.716347 4856 status_manager.go:851] "Failed 
to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.716672 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.716992 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.717293 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:51 crc kubenswrapper[4856]: I1203 09:16:51.717613 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:52 crc kubenswrapper[4856]: E1203 09:16:52.278486 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="1.6s" Dec 03 09:16:52 crc kubenswrapper[4856]: I1203 09:16:52.693484 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:52 crc kubenswrapper[4856]: I1203 09:16:52.694875 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:52 crc kubenswrapper[4856]: I1203 09:16:52.696065 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:52 crc kubenswrapper[4856]: I1203 09:16:52.696473 4856 status_manager.go:851] "Failed to get status for pod" 
podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:52 crc kubenswrapper[4856]: I1203 09:16:52.696677 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:52 crc kubenswrapper[4856]: I1203 09:16:52.697019 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:53 crc kubenswrapper[4856]: E1203 09:16:53.880149 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="3.2s" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.688224 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.689280 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.689690 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.690113 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.690418 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.690846 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: 
connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.691422 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.704529 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.704562 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:16:58 crc kubenswrapper[4856]: E1203 09:16:55.705368 4856 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:55.705921 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:56.698207 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be53174eb7ba373bdbd62086bf777f61dd107b0697067da75d47eef787fa596b"} Dec 03 09:16:58 crc kubenswrapper[4856]: E1203 09:16:57.082073 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="6.4s" Dec 03 09:16:58 crc kubenswrapper[4856]: E1203 09:16:58.132929 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-6hdmg.187da9de6e905222 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6hdmg,UID:9c211f61-54da-4cdd-b183-dcef0330433c,APIVersion:v1,ResourceVersion:29516,FieldPath:spec.initContainers{extract-utilities},},Reason:Created,Message:Created container extract-utilities,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 09:16:41.80188829 +0000 UTC m=+269.984780591,LastTimestamp:2025-12-03 09:16:41.80188829 +0000 UTC m=+269.984780591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.659601 4856 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.659726 4856 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.710656 4856 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c5e941b07bb8bfbd470e0e0cac2bff741eb53f4a61ecc1f23851ac9402e9da58" exitCode=0 Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.710980 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.710995 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.711129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c5e941b07bb8bfbd470e0e0cac2bff741eb53f4a61ecc1f23851ac9402e9da58"} Dec 03 09:16:58 crc kubenswrapper[4856]: E1203 09:16:58.711590 4856 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.713427 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.714008 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.714450 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.714735 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.715010 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.715429 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.716575 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.716643 4856 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843" exitCode=1 Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.716672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843"} Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.717313 4856 scope.go:117] "RemoveContainer" containerID="ad183cfc7bb38f85c751025c4a6ea8609352356dfc405e556d66342ccf556843" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.718667 4856 status_manager.go:851] "Failed to get status for pod" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" pod="openshift-marketplace/certified-operators-6hdmg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6hdmg\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.719383 4856 status_manager.go:851] "Failed to get status for pod" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.720003 4856 status_manager.go:851] "Failed to get status for pod" podUID="88a69196-dd0b-4747-b165-b72dcdfa48e4" pod="openshift-marketplace/redhat-operators-rvlfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rvlfn\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.720403 4856 status_manager.go:851] "Failed to get status for pod" podUID="0d5cdf16-9723-4454-9d50-01be3e7d70cc" pod="openshift-marketplace/redhat-marketplace-c2dlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2dlb\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.720663 4856 status_manager.go:851] "Failed to get status for pod" podUID="65a38bf6-9f1d-45db-a007-c04bae553534" pod="openshift-marketplace/community-operators-rk78k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rk78k\": dial tcp 38.102.83.129:6443: connect: 
connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.720910 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:58 crc kubenswrapper[4856]: I1203 09:16:58.721226 4856 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.588989 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" podUID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" containerName="oauth-openshift" containerID="cri-o://189e8ea22f4eca31a3dac4d5d205476c446f67e456523aaa16b43e8bbaaf0564" gracePeriod=15 Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.724544 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c2d195c90a3b3c0d0f7dd5e6226a47a3bdaed5d319cacdddce1f0f8662af05b4"} Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.724885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32ec1f07ab06d44e535adcfd7a7f1ef9ab58a3de46eca2834ae7640588b11325"} Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.724899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"305a799272f8dc2da25a3d7c7fa254b2460648eb698d5caead1c35e6ad520d61"} Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.726775 4856 generic.go:334] "Generic (PLEG): container finished" podID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" containerID="189e8ea22f4eca31a3dac4d5d205476c446f67e456523aaa16b43e8bbaaf0564" exitCode=0 Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.726855 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" event={"ID":"61769af3-d6a3-42b8-916e-bd4f05ae6b55","Type":"ContainerDied","Data":"189e8ea22f4eca31a3dac4d5d205476c446f67e456523aaa16b43e8bbaaf0564"} Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.731377 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 09:16:59 crc kubenswrapper[4856]: I1203 09:16:59.731469 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"febb0b6497af3654dd3f008ca48fc5477f98e7a1c89e59aa00a65a1a46a02a9f"} Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.179177 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.300584 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-serving-cert\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301192 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-cliconfig\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301273 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-login\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301308 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-router-certs\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301326 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-error\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301373 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/61769af3-d6a3-42b8-916e-bd4f05ae6b55-kube-api-access-4n7m5\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301416 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-session\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301461 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-dir\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301484 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-trusted-ca-bundle\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 
09:17:00.301520 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-idp-0-file-data\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301596 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-service-ca\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301612 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-ocp-branding-template\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301642 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-provider-selection\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.301671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-policies\") pod \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\" (UID: \"61769af3-d6a3-42b8-916e-bd4f05ae6b55\") " Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.302719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.302766 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.303092 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.304491 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.305043 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.308866 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.309258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61769af3-d6a3-42b8-916e-bd4f05ae6b55-kube-api-access-4n7m5" (OuterVolumeSpecName: "kube-api-access-4n7m5") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "kube-api-access-4n7m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.309465 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.309708 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.310087 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.310383 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.310455 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.311696 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.316956 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "61769af3-d6a3-42b8-916e-bd4f05ae6b55" (UID: "61769af3-d6a3-42b8-916e-bd4f05ae6b55"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403286 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403342 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403354 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403370 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403382 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403399 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403412 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403422 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403437 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403447 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/61769af3-d6a3-42b8-916e-bd4f05ae6b55-kube-api-access-4n7m5\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403460 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403470 4856 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61769af3-d6a3-42b8-916e-bd4f05ae6b55-audit-dir\") on 
node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403479 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.403491 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61769af3-d6a3-42b8-916e-bd4f05ae6b55-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.741133 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" event={"ID":"61769af3-d6a3-42b8-916e-bd4f05ae6b55","Type":"ContainerDied","Data":"aa5d9e803f195d0f8f75d3d3c0049d7ec8e3d61f8514a1436fc51e594d179231"} Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.741206 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2p8h7" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.741247 4856 scope.go:117] "RemoveContainer" containerID="189e8ea22f4eca31a3dac4d5d205476c446f67e456523aaa16b43e8bbaaf0564" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.746278 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe7c7d1c5a4708936f3534a7176f22bab50c402c94f61fc9af83d9e1f45c2841"} Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.746338 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1ce32dc2fa3f0075ea690979c961d1bc6058d0eb2bc38db531052d0ee458c385"} Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.746501 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.746655 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:00 crc kubenswrapper[4856]: I1203 09:17:00.746701 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:02 crc kubenswrapper[4856]: I1203 09:17:02.362605 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:17:02 crc kubenswrapper[4856]: I1203 09:17:02.367143 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:17:02 crc kubenswrapper[4856]: I1203 09:17:02.759330 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.713699 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.714099 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.714122 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.756155 4856 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.779615 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.779645 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.781738 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dc626c05-89e9-4284-ab12-5f7c4f772553" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.783751 4856 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://305a799272f8dc2da25a3d7c7fa254b2460648eb698d5caead1c35e6ad520d61" Dec 03 09:17:05 crc kubenswrapper[4856]: I1203 09:17:05.783769 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:06 crc kubenswrapper[4856]: I1203 09:17:06.785299 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:06 crc kubenswrapper[4856]: I1203 09:17:06.785351 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:06 crc kubenswrapper[4856]: I1203 09:17:06.789980 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dc626c05-89e9-4284-ab12-5f7c4f772553" Dec 03 09:17:15 crc kubenswrapper[4856]: I1203 09:17:15.183185 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 09:17:15 crc kubenswrapper[4856]: I1203 09:17:15.901031 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 09:17:16 crc kubenswrapper[4856]: I1203 09:17:16.476305 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 09:17:16 crc kubenswrapper[4856]: I1203 09:17:16.820650 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 09:17:16 crc kubenswrapper[4856]: I1203 09:17:16.915422 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 09:17:17 crc kubenswrapper[4856]: I1203 09:17:17.384366 4856 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 09:17:17 crc kubenswrapper[4856]: I1203 09:17:17.563587 4856 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 09:17:17 crc kubenswrapper[4856]: I1203 09:17:17.604276 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 09:17:17 crc kubenswrapper[4856]: I1203 09:17:17.624579 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.033049 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.153656 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.475354 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.568299 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.663001 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.683686 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.756111 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.785277 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.892590 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 09:17:18 crc kubenswrapper[4856]: I1203 09:17:18.945161 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.040650 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.080520 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.210688 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.333834 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.418150 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.443296 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.619175 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.637638 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.717607 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.795163 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.873091 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.921447 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.961840 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 09:17:19 crc kubenswrapper[4856]: I1203 09:17:19.966247 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.210465 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.333040 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.360025 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.375344 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.400484 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.419480 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.510361 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.697478 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.771174 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.796303 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.796831 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.831685 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.845900 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.887680 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 09:17:20 crc kubenswrapper[4856]: I1203 09:17:20.954921 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.028580 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.036663 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.113791 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.138015 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.178284 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.206563 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.354414 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.363386 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.418238 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.426609 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.432641 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.637877 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.652418 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.826896 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.835560 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 09:17:21 crc 
kubenswrapper[4856]: I1203 09:17:21.877746 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.960607 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 09:17:21 crc kubenswrapper[4856]: I1203 09:17:21.976988 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.062832 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.071309 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.076261 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.092782 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.169337 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.188139 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.238525 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.339559 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.376058 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.675942 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.691148 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.742551 4856 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.760725 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.854412 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.869080 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.902758 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 09:17:22 crc 
kubenswrapper[4856]: I1203 09:17:22.944703 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 09:17:22 crc kubenswrapper[4856]: I1203 09:17:22.976746 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.038088 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.079441 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.236481 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.255783 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.310432 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.325277 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.480981 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.485920 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.618945 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.627390 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.688849 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.738617 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.742361 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.745321 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.811868 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.816164 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.831737 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 09:17:23 crc kubenswrapper[4856]: 
I1203 09:17:23.968769 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 09:17:23 crc kubenswrapper[4856]: I1203 09:17:23.997606 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.006309 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.067996 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.068595 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.075902 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.112742 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.116424 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.130036 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.160035 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.188268 4856 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.188843 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hdmg" podStartSLOduration=41.764317701 podStartE2EDuration="44.188828367s" podCreationTimestamp="2025-12-03 09:16:40 +0000 UTC" firstStartedPulling="2025-12-03 09:16:42.586641166 +0000 UTC m=+270.769533467" lastFinishedPulling="2025-12-03 09:16:45.011151832 +0000 UTC m=+273.194044133" observedRunningTime="2025-12-03 09:17:05.576264262 +0000 UTC m=+293.759156563" watchObservedRunningTime="2025-12-03 09:17:24.188828367 +0000 UTC m=+312.371720668" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.189078 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.189075055 podStartE2EDuration="43.189075055s" podCreationTimestamp="2025-12-03 09:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:17:05.68750371 +0000 UTC m=+293.870396011" watchObservedRunningTime="2025-12-03 09:17:24.189075055 +0000 UTC m=+312.371967346" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.190516 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rk78k" podStartSLOduration=41.693547814 podStartE2EDuration="44.190510968s" podCreationTimestamp="2025-12-03 09:16:40 +0000 UTC" firstStartedPulling="2025-12-03 09:16:41.562699506 
+0000 UTC m=+269.745591807" lastFinishedPulling="2025-12-03 09:16:44.05966266 +0000 UTC m=+272.242554961" observedRunningTime="2025-12-03 09:17:05.676316839 +0000 UTC m=+293.859209140" watchObservedRunningTime="2025-12-03 09:17:24.190510968 +0000 UTC m=+312.373403269" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.191573 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvlfn" podStartSLOduration=43.758414171 podStartE2EDuration="46.19156638s" podCreationTimestamp="2025-12-03 09:16:38 +0000 UTC" firstStartedPulling="2025-12-03 09:16:39.518574263 +0000 UTC m=+267.701466564" lastFinishedPulling="2025-12-03 09:16:41.951726472 +0000 UTC m=+270.134618773" observedRunningTime="2025-12-03 09:17:05.63726397 +0000 UTC m=+293.820156271" watchObservedRunningTime="2025-12-03 09:17:24.19156638 +0000 UTC m=+312.374458681" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.203666 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2p8h7","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.203915 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.205197 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.205231 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5c98fdc8-8cd3-4fa2-a733-6427e4052ef2" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.242495 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.253745 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.253714443 podStartE2EDuration="19.253714443s" podCreationTimestamp="2025-12-03 09:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:17:24.252104224 +0000 UTC m=+312.434996535" watchObservedRunningTime="2025-12-03 09:17:24.253714443 +0000 UTC m=+312.436606744" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.297658 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.382945 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.437825 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.500677 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.532246 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.543653 4856 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.554660 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.637403 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.686338 4856 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.696833 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" path="/var/lib/kubelet/pods/61769af3-d6a3-42b8-916e-bd4f05ae6b55/volumes" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.875872 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.924410 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 09:17:24 crc kubenswrapper[4856]: I1203 09:17:24.967206 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.087973 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.095925 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.162321 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.260749 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.452588 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.490905 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.550862 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.588724 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.596547 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.660863 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.847333 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 09:17:25 crc kubenswrapper[4856]: I1203 09:17:25.881592 4856 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.003676 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.042202 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.148461 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.240518 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.241480 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.332277 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.507823 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.575046 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.655718 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.660673 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.738751 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.765620 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.879430 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.930307 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.978941 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.979345 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 09:17:26 crc kubenswrapper[4856]: I1203 09:17:26.998283 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.033218 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.090023 4856 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.102162 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.120839 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.261131 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.354341 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.355279 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.432852 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.453242 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.457797 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.696544 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.698125 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.762157 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.776455 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.813642 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.820963 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 09:17:27 crc kubenswrapper[4856]: I1203 09:17:27.974004 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.051944 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.123800 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.136705 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.197306 4856 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 
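The long run of reflector.go:368 "Caches populated" entries above is client-go's reflector layer finishing its initial LIST for each Secret/ConfigMap the kubelet watches on behalf of pods; after that point the local cache serves reads and a WATCH keeps it current. A minimal client-go sketch of the same list-then-watch mechanism follows. It is an illustration only, not kubelet code (the kubelet wires per-object reflectors internally rather than a shared factory), and the kubeconfig path is an assumption; the namespace is one that appears in the log.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is a placeholder assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Shared informer factory scoped to one namespace, resyncing every 10 minutes.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 10*time.Minute,
		informers.WithNamespace("openshift-authentication"))

	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer runs a reflector: one LIST, then a WATCH

	// Block until the initial LIST has landed in the local cache; this is the
	// moment the kubelet logs as "Caches populated for *v1.ConfigMap ...".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("configmap cache populated")
}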
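The pod_startup_latency_tracker records above also show how the two reported durations relate: for certified-operators-6hdmg, podStartSLOduration (41.764317701s) is exactly podStartE2EDuration (44.188828367s) minus the image-pull window (lastFinishedPulling - firstStartedPulling = 2.424510666s), consistent with the pod-startup SLI excluding image pulls. A small Go check using the timestamps copied from that record (the parse layout assumes Go's default time.Time string format, which these logs use):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds accepted on parse
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values from the certified-operators-6hdmg record above.
	created := parse("2025-12-03 09:16:40 +0000 UTC")
	firstPull := parse("2025-12-03 09:16:42.586641166 +0000 UTC")
	lastPull := parse("2025-12-03 09:16:45.011151832 +0000 UTC")
	running := parse("2025-12-03 09:17:24.188828367 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)     // 44.188828367s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 2.424510666s of image pulling
	slo := e2e - pull               // 41.764317701s = podStartSLOduration
	fmt.Println(e2e, pull, slo)
}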
Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.202580 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.293678 4856 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.293998 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3d17478acfbbf7b66b5cddb8d9bdf9c4c1339fa73c664752764b70df0f0a17fd" gracePeriod=5 Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.395966 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.475427 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.529550 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.587873 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.696742 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.737702 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.758237 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.785922 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.936655 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.977198 4856 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 09:17:28 crc kubenswrapper[4856]: I1203 09:17:28.978485 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.005071 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.082460 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.097340 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.113726 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.213359 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.215314 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.215994 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.249661 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.420268 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.761754 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 09:17:29 crc kubenswrapper[4856]: I1203 09:17:29.814229 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.036594 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.249001 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.326498 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.415354 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.531391 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.534117 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.586328 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.619677 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.741211 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 09:17:30 crc kubenswrapper[4856]: I1203 09:17:30.937604 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 09:17:31 crc kubenswrapper[4856]: I1203 09:17:31.169292 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 09:17:31 crc kubenswrapper[4856]: I1203 09:17:31.250179 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 09:17:31 crc kubenswrapper[4856]: I1203 09:17:31.421129 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 09:17:31 crc kubenswrapper[4856]: I1203 09:17:31.668895 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 09:17:31 crc kubenswrapper[4856]: I1203 09:17:31.901467 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 09:17:33 crc kubenswrapper[4856]: I1203 09:17:33.968904 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 09:17:33 crc kubenswrapper[4856]: I1203 09:17:33.968964 4856 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3d17478acfbbf7b66b5cddb8d9bdf9c4c1339fa73c664752764b70df0f0a17fd" exitCode=137 Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.173933 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.174054 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259212 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259289 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259347 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259400 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259487 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259838 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod 
"f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.259883 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.260038 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.260071 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.267953 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.360402 4856 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.360439 4856 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.360451 4856 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.360467 4856 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.360479 4856 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.698496 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.699391 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.715910 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.716344 4856 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="dcd71a1f-cf4e-483b-acc6-7406a17b89e8" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.720105 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.720155 4856 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="dcd71a1f-cf4e-483b-acc6-7406a17b89e8" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.975945 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.976017 4856 scope.go:117] "RemoveContainer" containerID="3d17478acfbbf7b66b5cddb8d9bdf9c4c1339fa73c664752764b70df0f0a17fd" Dec 03 09:17:34 crc kubenswrapper[4856]: I1203 09:17:34.976121 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.673337 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-586d5b9769-hlvrt"] Dec 03 09:17:41 crc kubenswrapper[4856]: E1203 09:17:41.674082 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" containerName="oauth-openshift" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674191 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" containerName="oauth-openshift" Dec 03 09:17:41 crc kubenswrapper[4856]: E1203 09:17:41.674219 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" containerName="installer" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674225 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" containerName="installer" Dec 03 09:17:41 crc kubenswrapper[4856]: E1203 09:17:41.674238 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674244 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674336 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="61769af3-d6a3-42b8-916e-bd4f05ae6b55" containerName="oauth-openshift" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674350 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674362 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e5cefc-7eaf-45bc-919c-854583e63aba" containerName="installer" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.674723 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.678566 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.679036 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.679281 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.679772 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.680016 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.680540 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.680798 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.681535 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.681681 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.682521 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.682632 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.683981 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.689381 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.694938 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-586d5b9769-hlvrt"] Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.698486 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.704266 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.766868 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-audit-policies\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " 
pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.766970 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767041 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-error\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnwb\" (UniqueName: \"kubernetes.io/projected/18b4d91e-4387-4304-939c-3dfea30559db-kube-api-access-7hnwb\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767132 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-login\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767204 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767279 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-router-certs\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767309 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767352 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-service-ca\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767391 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18b4d91e-4387-4304-939c-3dfea30559db-audit-dir\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767458 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-session\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.767481 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.869656 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-audit-policies\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.869747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.869839 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7hnwb\" (UniqueName: \"kubernetes.io/projected/18b4d91e-4387-4304-939c-3dfea30559db-kube-api-access-7hnwb\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.869880 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-error\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.869937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-login\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.869969 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.870029 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.870065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-router-certs\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.870302 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.870351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-service-ca\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.870414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.871500 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-audit-policies\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.872508 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18b4d91e-4387-4304-939c-3dfea30559db-audit-dir\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.872951 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-service-ca\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.873234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18b4d91e-4387-4304-939c-3dfea30559db-audit-dir\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.873323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-session\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.873371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.874478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.875604 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.878440 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-error\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.878662 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-login\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.878793 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.879484 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-router-certs\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.879768 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-session\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.880614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.886100 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.886894 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18b4d91e-4387-4304-939c-3dfea30559db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-586d5b9769-hlvrt\" 
(UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.896981 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnwb\" (UniqueName: \"kubernetes.io/projected/18b4d91e-4387-4304-939c-3dfea30559db-kube-api-access-7hnwb\") pod \"oauth-openshift-586d5b9769-hlvrt\" (UID: \"18b4d91e-4387-4304-939c-3dfea30559db\") " pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:41 crc kubenswrapper[4856]: I1203 09:17:41.996577 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:42 crc kubenswrapper[4856]: I1203 09:17:42.200586 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-586d5b9769-hlvrt"] Dec 03 09:17:43 crc kubenswrapper[4856]: I1203 09:17:43.025026 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" event={"ID":"18b4d91e-4387-4304-939c-3dfea30559db","Type":"ContainerStarted","Data":"853f55f1b7fbe1ba47f23c64e17c7e682857c7d63170063859b7d14755586efd"} Dec 03 09:17:43 crc kubenswrapper[4856]: I1203 09:17:43.025582 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:43 crc kubenswrapper[4856]: I1203 09:17:43.025597 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" event={"ID":"18b4d91e-4387-4304-939c-3dfea30559db","Type":"ContainerStarted","Data":"50bb6d9d658d5e00de026d4ecc66f4bc5f20f373815ffd2e4c75c0fd8ae6d39e"} Dec 03 09:17:43 crc kubenswrapper[4856]: I1203 09:17:43.032046 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" Dec 03 09:17:43 crc kubenswrapper[4856]: I1203 09:17:43.051768 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-586d5b9769-hlvrt" podStartSLOduration=69.051712552 podStartE2EDuration="1m9.051712552s" podCreationTimestamp="2025-12-03 09:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:17:43.048768019 +0000 UTC m=+331.231660320" watchObservedRunningTime="2025-12-03 09:17:43.051712552 +0000 UTC m=+331.234604843" Dec 03 09:17:43 crc kubenswrapper[4856]: I1203 09:17:43.226245 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 09:17:46 crc kubenswrapper[4856]: I1203 09:17:46.890931 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 09:17:47 crc kubenswrapper[4856]: I1203 09:17:47.135187 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 09:17:48 crc kubenswrapper[4856]: I1203 09:17:48.122864 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 09:17:48 crc kubenswrapper[4856]: I1203 09:17:48.260712 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 09:17:49 crc 
Dec 03 09:17:49 crc kubenswrapper[4856]: I1203 09:17:49.584355 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 03 09:17:51 crc kubenswrapper[4856]: I1203 09:17:51.506782 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 09:17:51 crc kubenswrapper[4856]: I1203 09:17:51.723251 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 03 09:17:52 crc kubenswrapper[4856]: I1203 09:17:52.013542 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 09:17:52 crc kubenswrapper[4856]: I1203 09:17:52.042908 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 03 09:17:52 crc kubenswrapper[4856]: I1203 09:17:52.146529 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 09:17:53 crc kubenswrapper[4856]: I1203 09:17:53.817352 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 03 09:17:56 crc kubenswrapper[4856]: I1203 09:17:56.186668 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 03 09:17:58 crc kubenswrapper[4856]: I1203 09:17:58.966438 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 09:18:00 crc kubenswrapper[4856]: I1203 09:18:00.099265 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 09:18:00 crc kubenswrapper[4856]: I1203 09:18:00.240343 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 09:18:00 crc kubenswrapper[4856]: I1203 09:18:00.560573 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 03 09:18:00 crc kubenswrapper[4856]: I1203 09:18:00.722877 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 03 09:18:01 crc kubenswrapper[4856]: I1203 09:18:01.970370 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 03 09:18:03 crc kubenswrapper[4856]: I1203 09:18:03.455749 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 03 09:18:03 crc kubenswrapper[4856]: I1203 09:18:03.558035 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 09:18:04 crc kubenswrapper[4856]: I1203 09:18:04.866032 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 09:18:06 crc kubenswrapper[4856]: I1203 09:18:06.327177 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 09:18:06 crc kubenswrapper[4856]: I1203 09:18:06.652049 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 09:18:07 crc kubenswrapper[4856]: I1203 09:18:07.982323 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 03 09:18:08 crc kubenswrapper[4856]: I1203 09:18:08.459413 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 09:18:22 crc kubenswrapper[4856]: I1203 09:18:22.759607 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 09:18:22 crc kubenswrapper[4856]: I1203 09:18:22.760413 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.321991 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fm6l9"]
Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.322877 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerName="controller-manager" containerID="cri-o://dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48" gracePeriod=30
Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.432439 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"]
Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.432713 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager" containerID="cri-o://5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942" gracePeriod=30
Dec 03 09:18:27 crc kubenswrapper[4856]: E1203 09:18:27.608989 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26dd43e6_7cf4_42d1_9639_70d17ccae700.slice/crio-conmon-5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26dd43e6_7cf4_42d1_9639_70d17ccae700.slice/crio-5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942.scope\": RecentStats: unable to find data in memory cache]"
Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.807204 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.963025 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba47662-ffd7-4182-9a06-2f085abcc5e7-serving-cert\") pod \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.963101 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-client-ca\") pod \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.963188 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-config\") pod \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.963252 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh7lw\" (UniqueName: \"kubernetes.io/projected/0ba47662-ffd7-4182-9a06-2f085abcc5e7-kube-api-access-rh7lw\") pod \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.963277 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-proxy-ca-bundles\") pod \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\" (UID: \"0ba47662-ffd7-4182-9a06-2f085abcc5e7\") " Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.964168 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0ba47662-ffd7-4182-9a06-2f085abcc5e7" (UID: "0ba47662-ffd7-4182-9a06-2f085abcc5e7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.964413 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ba47662-ffd7-4182-9a06-2f085abcc5e7" (UID: "0ba47662-ffd7-4182-9a06-2f085abcc5e7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.965196 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-config" (OuterVolumeSpecName: "config") pod "0ba47662-ffd7-4182-9a06-2f085abcc5e7" (UID: "0ba47662-ffd7-4182-9a06-2f085abcc5e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.970095 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba47662-ffd7-4182-9a06-2f085abcc5e7-kube-api-access-rh7lw" (OuterVolumeSpecName: "kube-api-access-rh7lw") pod "0ba47662-ffd7-4182-9a06-2f085abcc5e7" (UID: "0ba47662-ffd7-4182-9a06-2f085abcc5e7"). 
InnerVolumeSpecName "kube-api-access-rh7lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:18:27 crc kubenswrapper[4856]: I1203 09:18:27.970151 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba47662-ffd7-4182-9a06-2f085abcc5e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ba47662-ffd7-4182-9a06-2f085abcc5e7" (UID: "0ba47662-ffd7-4182-9a06-2f085abcc5e7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.064848 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-client-ca\") pod \"26dd43e6-7cf4-42d1-9639-70d17ccae700\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.064959 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-config\") pod \"26dd43e6-7cf4-42d1-9639-70d17ccae700\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065037 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dd43e6-7cf4-42d1-9639-70d17ccae700-serving-cert\") pod \"26dd43e6-7cf4-42d1-9639-70d17ccae700\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p54x\" (UniqueName: \"kubernetes.io/projected/26dd43e6-7cf4-42d1-9639-70d17ccae700-kube-api-access-7p54x\") pod \"26dd43e6-7cf4-42d1-9639-70d17ccae700\" (UID: \"26dd43e6-7cf4-42d1-9639-70d17ccae700\") " Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065385 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba47662-ffd7-4182-9a06-2f085abcc5e7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065400 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065411 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065422 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh7lw\" (UniqueName: \"kubernetes.io/projected/0ba47662-ffd7-4182-9a06-2f085abcc5e7-kube-api-access-rh7lw\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.065433 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ba47662-ffd7-4182-9a06-2f085abcc5e7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.066216 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-client-ca" (OuterVolumeSpecName: "client-ca") pod "26dd43e6-7cf4-42d1-9639-70d17ccae700" (UID: 
"26dd43e6-7cf4-42d1-9639-70d17ccae700"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.066488 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-config" (OuterVolumeSpecName: "config") pod "26dd43e6-7cf4-42d1-9639-70d17ccae700" (UID: "26dd43e6-7cf4-42d1-9639-70d17ccae700"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.069151 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dd43e6-7cf4-42d1-9639-70d17ccae700-kube-api-access-7p54x" (OuterVolumeSpecName: "kube-api-access-7p54x") pod "26dd43e6-7cf4-42d1-9639-70d17ccae700" (UID: "26dd43e6-7cf4-42d1-9639-70d17ccae700"). InnerVolumeSpecName "kube-api-access-7p54x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.069424 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26dd43e6-7cf4-42d1-9639-70d17ccae700-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26dd43e6-7cf4-42d1-9639-70d17ccae700" (UID: "26dd43e6-7cf4-42d1-9639-70d17ccae700"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.166930 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p54x\" (UniqueName: \"kubernetes.io/projected/26dd43e6-7cf4-42d1-9639-70d17ccae700-kube-api-access-7p54x\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.166998 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.167009 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dd43e6-7cf4-42d1-9639-70d17ccae700-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.167018 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dd43e6-7cf4-42d1-9639-70d17ccae700-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.303217 4856 generic.go:334] "Generic (PLEG): container finished" podID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerID="5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942" exitCode=0 Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.303277 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.303310 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" event={"ID":"26dd43e6-7cf4-42d1-9639-70d17ccae700","Type":"ContainerDied","Data":"5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942"} Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.303361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f" event={"ID":"26dd43e6-7cf4-42d1-9639-70d17ccae700","Type":"ContainerDied","Data":"69847289fdd61273b13220a5c78555e41a33c5e6642975fa7d024a69e8205fa7"} Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.303381 4856 scope.go:117] "RemoveContainer" containerID="5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.307845 4856 generic.go:334] "Generic (PLEG): container finished" podID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerID="dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48" exitCode=0 Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.307887 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" event={"ID":"0ba47662-ffd7-4182-9a06-2f085abcc5e7","Type":"ContainerDied","Data":"dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48"} Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.307912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" event={"ID":"0ba47662-ffd7-4182-9a06-2f085abcc5e7","Type":"ContainerDied","Data":"992e7048e8e9c9730df46cf7df7f6e2414c920cce8a6c2ec45e2ebff4010d6f5"} Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.307997 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fm6l9" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.331221 4856 scope.go:117] "RemoveContainer" containerID="5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942" Dec 03 09:18:28 crc kubenswrapper[4856]: E1203 09:18:28.332593 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942\": container with ID starting with 5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942 not found: ID does not exist" containerID="5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.332666 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942"} err="failed to get container status \"5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942\": rpc error: code = NotFound desc = could not find container \"5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942\": container with ID starting with 5af89e31099c673774ceecd3a6d7da583c386e398683b7aa4a69db16b98eb942 not found: ID does not exist" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.332722 4856 scope.go:117] "RemoveContainer" containerID="dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.343429 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"] Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.352080 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k2n8f"] Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.366039 4856 scope.go:117] "RemoveContainer" containerID="dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.366058 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fm6l9"] Dec 03 09:18:28 crc kubenswrapper[4856]: E1203 09:18:28.366853 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48\": container with ID starting with dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48 not found: ID does not exist" containerID="dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.366889 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48"} err="failed to get container status \"dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48\": rpc error: code = NotFound desc = could not find container \"dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48\": container with ID starting with dbf411024eda4c398d94855df9f45f0ee76d3bbbee79fbaa8cb30448c4da1f48 not found: ID does not exist" Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.371281 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fm6l9"] Dec 03 09:18:28 crc 
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.702766 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" path="/var/lib/kubelet/pods/0ba47662-ffd7-4182-9a06-2f085abcc5e7/volumes"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.703384 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" path="/var/lib/kubelet/pods/26dd43e6-7cf4-42d1-9639-70d17ccae700/volumes"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.706176 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"]
Dec 03 09:18:28 crc kubenswrapper[4856]: E1203 09:18:28.706418 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerName="controller-manager"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.706435 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerName="controller-manager"
Dec 03 09:18:28 crc kubenswrapper[4856]: E1203 09:18:28.706462 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.706469 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.706586 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba47662-ffd7-4182-9a06-2f085abcc5e7" containerName="controller-manager"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.706615 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="26dd43e6-7cf4-42d1-9639-70d17ccae700" containerName="route-controller-manager"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.707087 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.709860 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"]
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.710486 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.710893 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.711183 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.711454 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.711872 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.713792 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.714328 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.721185 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.721420 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.722329 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.731995 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"]
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.738736 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.741265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.742632 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.746616 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"]
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.747781 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876176 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e350e-226c-4245-a6ac-8111175a1777-serving-cert\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876229 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-config\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876250 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904a36b3-642b-4e98-ae31-d872a3fe5c15-serving-cert\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876281 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-config\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876332 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6bv\" (UniqueName: \"kubernetes.io/projected/8b1e350e-226c-4245-a6ac-8111175a1777-kube-api-access-5r6bv\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876356 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-proxy-ca-bundles\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876381 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-client-ca\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876434 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvd4\" (UniqueName: \"kubernetes.io/projected/904a36b3-642b-4e98-ae31-d872a3fe5c15-kube-api-access-6jvd4\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.876485 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-client-ca\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.977951 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e350e-226c-4245-a6ac-8111175a1777-serving-cert\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.978035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-config\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.978070 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904a36b3-642b-4e98-ae31-d872a3fe5c15-serving-cert\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.978110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-config\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.978146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6bv\" (UniqueName: \"kubernetes.io/projected/8b1e350e-226c-4245-a6ac-8111175a1777-kube-api-access-5r6bv\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.979859 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-config\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.980076 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-config\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.980198 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-proxy-ca-bundles\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.980267 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-client-ca\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.980302 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvd4\" (UniqueName: \"kubernetes.io/projected/904a36b3-642b-4e98-ae31-d872a3fe5c15-kube-api-access-6jvd4\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.980357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-client-ca\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.981075 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-client-ca\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.981092 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-client-ca\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.982534 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/904a36b3-642b-4e98-ae31-d872a3fe5c15-proxy-ca-bundles\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.983852 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904a36b3-642b-4e98-ae31-d872a3fe5c15-serving-cert\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:28 crc kubenswrapper[4856]: I1203 09:18:28.986272 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e350e-226c-4245-a6ac-8111175a1777-serving-cert\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:28.998525 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6bv\" (UniqueName: \"kubernetes.io/projected/8b1e350e-226c-4245-a6ac-8111175a1777-kube-api-access-5r6bv\") pod \"route-controller-manager-7b4bc89c55-frbdm\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:29.004477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvd4\" (UniqueName: \"kubernetes.io/projected/904a36b3-642b-4e98-ae31-d872a3fe5c15-kube-api-access-6jvd4\") pod \"controller-manager-f77d5fb89-qvsq6\" (UID: \"904a36b3-642b-4e98-ae31-d872a3fe5c15\") " pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:29.041055 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:29.062412 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:29.257727 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f77d5fb89-qvsq6"]
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:29.331539 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"]
Dec 03 09:18:29 crc kubenswrapper[4856]: I1203 09:18:29.333823 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6" event={"ID":"904a36b3-642b-4e98-ae31-d872a3fe5c15","Type":"ContainerStarted","Data":"61f3a71f8eec9313a493bd42d56ff9d8f9960f582fe5e10e84548971ef033859"}
Dec 03 09:18:29 crc kubenswrapper[4856]: W1203 09:18:29.340428 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b1e350e_226c_4245_a6ac_8111175a1777.slice/crio-2d37caee2f38655a710088cd3d41e1310e910656fb1816f9d1faa7c9c5bf73a2 WatchSource:0}: Error finding container 2d37caee2f38655a710088cd3d41e1310e910656fb1816f9d1faa7c9c5bf73a2: Status 404 returned error can't find the container with id 2d37caee2f38655a710088cd3d41e1310e910656fb1816f9d1faa7c9c5bf73a2
Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.343559 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" event={"ID":"8b1e350e-226c-4245-a6ac-8111175a1777","Type":"ContainerStarted","Data":"60062281ce609f7bc1bb34efa7a667064ac5c7f228a9172a1905a5468842d4c5"}
Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.344039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" event={"ID":"8b1e350e-226c-4245-a6ac-8111175a1777","Type":"ContainerStarted","Data":"2d37caee2f38655a710088cd3d41e1310e910656fb1816f9d1faa7c9c5bf73a2"}
Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.344070 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.351825 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"
Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.352495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6" event={"ID":"904a36b3-642b-4e98-ae31-d872a3fe5c15","Type":"ContainerStarted","Data":"8f58e04f043ba35dedfbbe07b7fdace4872873c90d013326ee51518e50744073"}
Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.352800 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6" Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.357306 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6" Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.366156 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" podStartSLOduration=3.366139894 podStartE2EDuration="3.366139894s" podCreationTimestamp="2025-12-03 09:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:18:30.365311476 +0000 UTC m=+378.548203777" watchObservedRunningTime="2025-12-03 09:18:30.366139894 +0000 UTC m=+378.549032195" Dec 03 09:18:30 crc kubenswrapper[4856]: I1203 09:18:30.386655 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f77d5fb89-qvsq6" podStartSLOduration=3.386630212 podStartE2EDuration="3.386630212s" podCreationTimestamp="2025-12-03 09:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:18:30.385978608 +0000 UTC m=+378.568870899" watchObservedRunningTime="2025-12-03 09:18:30.386630212 +0000 UTC m=+378.569522513" Dec 03 09:18:52 crc kubenswrapper[4856]: I1203 09:18:52.759399 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:18:52 crc kubenswrapper[4856]: I1203 09:18:52.760575 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.628267 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jv9fd"] Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.629063 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.655578 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jv9fd"] Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759565 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-bound-sa-token\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759634 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-registry-certificates\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759697 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759748 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-trusted-ca\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759770 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p676\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-kube-api-access-8p676\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.759903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.760930 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-registry-tls\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.785770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862496 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-registry-tls\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862564 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-bound-sa-token\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862584 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-registry-certificates\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-trusted-ca\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862673 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p676\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-kube-api-access-8p676\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.862713 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.863669 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.864218 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-registry-certificates\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.864264 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-trusted-ca\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.870105 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.870115 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-registry-tls\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.881628 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-bound-sa-token\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.884503 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p676\" (UniqueName: \"kubernetes.io/projected/635b1e5b-f970-4fdc-b10e-fe39faa1a22c-kube-api-access-8p676\") pod \"image-registry-66df7c8f76-jv9fd\" (UID: \"635b1e5b-f970-4fdc-b10e-fe39faa1a22c\") " pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:54 crc kubenswrapper[4856]: I1203 09:18:54.946549 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:55 crc kubenswrapper[4856]: I1203 09:18:55.370305 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jv9fd"] Dec 03 09:18:55 crc kubenswrapper[4856]: I1203 09:18:55.507669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" event={"ID":"635b1e5b-f970-4fdc-b10e-fe39faa1a22c","Type":"ContainerStarted","Data":"5383f5b0f7573d132abb4424cafc27c5e8ace32f53dda4942e2535275a6c5e12"} Dec 03 09:18:56 crc kubenswrapper[4856]: I1203 09:18:56.515469 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" event={"ID":"635b1e5b-f970-4fdc-b10e-fe39faa1a22c","Type":"ContainerStarted","Data":"a77db3389e4c5230dfc3c565d139fbcca9fc77a8b726dcd8514cce4f1cd612b4"} Dec 03 09:18:56 crc kubenswrapper[4856]: I1203 09:18:56.516065 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:18:56 crc kubenswrapper[4856]: I1203 09:18:56.551671 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" podStartSLOduration=2.551636337 podStartE2EDuration="2.551636337s" podCreationTimestamp="2025-12-03 09:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:18:56.547187522 +0000 UTC m=+404.730079843" watchObservedRunningTime="2025-12-03 09:18:56.551636337 +0000 UTC m=+404.734528638" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.306307 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"] Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.307528 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" podUID="8b1e350e-226c-4245-a6ac-8111175a1777" containerName="route-controller-manager" containerID="cri-o://60062281ce609f7bc1bb34efa7a667064ac5c7f228a9172a1905a5468842d4c5" gracePeriod=30 Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.583349 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b1e350e-226c-4245-a6ac-8111175a1777" containerID="60062281ce609f7bc1bb34efa7a667064ac5c7f228a9172a1905a5468842d4c5" exitCode=0 Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.583394 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" event={"ID":"8b1e350e-226c-4245-a6ac-8111175a1777","Type":"ContainerDied","Data":"60062281ce609f7bc1bb34efa7a667064ac5c7f228a9172a1905a5468842d4c5"} Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.718245 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.798644 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-client-ca\") pod \"8b1e350e-226c-4245-a6ac-8111175a1777\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.798742 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r6bv\" (UniqueName: \"kubernetes.io/projected/8b1e350e-226c-4245-a6ac-8111175a1777-kube-api-access-5r6bv\") pod \"8b1e350e-226c-4245-a6ac-8111175a1777\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.798909 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e350e-226c-4245-a6ac-8111175a1777-serving-cert\") pod \"8b1e350e-226c-4245-a6ac-8111175a1777\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.798941 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-config\") pod \"8b1e350e-226c-4245-a6ac-8111175a1777\" (UID: \"8b1e350e-226c-4245-a6ac-8111175a1777\") " Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.799438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b1e350e-226c-4245-a6ac-8111175a1777" (UID: "8b1e350e-226c-4245-a6ac-8111175a1777"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.801484 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-config" (OuterVolumeSpecName: "config") pod "8b1e350e-226c-4245-a6ac-8111175a1777" (UID: "8b1e350e-226c-4245-a6ac-8111175a1777"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.808207 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1e350e-226c-4245-a6ac-8111175a1777-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b1e350e-226c-4245-a6ac-8111175a1777" (UID: "8b1e350e-226c-4245-a6ac-8111175a1777"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.808211 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1e350e-226c-4245-a6ac-8111175a1777-kube-api-access-5r6bv" (OuterVolumeSpecName: "kube-api-access-5r6bv") pod "8b1e350e-226c-4245-a6ac-8111175a1777" (UID: "8b1e350e-226c-4245-a6ac-8111175a1777"). InnerVolumeSpecName "kube-api-access-5r6bv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.900176 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.900222 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r6bv\" (UniqueName: \"kubernetes.io/projected/8b1e350e-226c-4245-a6ac-8111175a1777-kube-api-access-5r6bv\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.900235 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e350e-226c-4245-a6ac-8111175a1777-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:07 crc kubenswrapper[4856]: I1203 09:19:07.900245 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e350e-226c-4245-a6ac-8111175a1777-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.593244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" event={"ID":"8b1e350e-226c-4245-a6ac-8111175a1777","Type":"ContainerDied","Data":"2d37caee2f38655a710088cd3d41e1310e910656fb1816f9d1faa7c9c5bf73a2"} Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.593335 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.593362 4856 scope.go:117] "RemoveContainer" containerID="60062281ce609f7bc1bb34efa7a667064ac5c7f228a9172a1905a5468842d4c5" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.637166 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"] Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.643523 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b4bc89c55-frbdm"] Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.696513 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1e350e-226c-4245-a6ac-8111175a1777" path="/var/lib/kubelet/pods/8b1e350e-226c-4245-a6ac-8111175a1777/volumes" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.736266 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps"] Dec 03 09:19:08 crc kubenswrapper[4856]: E1203 09:19:08.736673 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1e350e-226c-4245-a6ac-8111175a1777" containerName="route-controller-manager" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.736717 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1e350e-226c-4245-a6ac-8111175a1777" containerName="route-controller-manager" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.736863 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1e350e-226c-4245-a6ac-8111175a1777" containerName="route-controller-manager" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.737461 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.740055 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.740255 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.740514 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.740613 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.740628 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.742742 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.751375 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps"] Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.815403 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cee4516a-b5a5-4e75-a660-eb64f314f32f-client-ca\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.815519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee4516a-b5a5-4e75-a660-eb64f314f32f-config\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.815610 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee4516a-b5a5-4e75-a660-eb64f314f32f-serving-cert\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.815680 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74rq6\" (UniqueName: \"kubernetes.io/projected/cee4516a-b5a5-4e75-a660-eb64f314f32f-kube-api-access-74rq6\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.917025 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee4516a-b5a5-4e75-a660-eb64f314f32f-config\") pod 
\"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.917137 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee4516a-b5a5-4e75-a660-eb64f314f32f-serving-cert\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.917158 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74rq6\" (UniqueName: \"kubernetes.io/projected/cee4516a-b5a5-4e75-a660-eb64f314f32f-kube-api-access-74rq6\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.917212 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cee4516a-b5a5-4e75-a660-eb64f314f32f-client-ca\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.918512 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cee4516a-b5a5-4e75-a660-eb64f314f32f-client-ca\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.918568 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee4516a-b5a5-4e75-a660-eb64f314f32f-config\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.923031 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee4516a-b5a5-4e75-a660-eb64f314f32f-serving-cert\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:08 crc kubenswrapper[4856]: I1203 09:19:08.935147 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74rq6\" (UniqueName: \"kubernetes.io/projected/cee4516a-b5a5-4e75-a660-eb64f314f32f-kube-api-access-74rq6\") pod \"route-controller-manager-7d4ff77958-m7bps\" (UID: \"cee4516a-b5a5-4e75-a660-eb64f314f32f\") " pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:09 crc kubenswrapper[4856]: I1203 09:19:09.057674 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:09 crc kubenswrapper[4856]: I1203 09:19:09.486738 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps"] Dec 03 09:19:09 crc kubenswrapper[4856]: I1203 09:19:09.602904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" event={"ID":"cee4516a-b5a5-4e75-a660-eb64f314f32f","Type":"ContainerStarted","Data":"23eff452c41f53fa21f2ec712c415b4d24d36751c2c411517838db7ade5cb1e9"} Dec 03 09:19:10 crc kubenswrapper[4856]: I1203 09:19:10.610039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" event={"ID":"cee4516a-b5a5-4e75-a660-eb64f314f32f","Type":"ContainerStarted","Data":"d3e097b5a16b5a570714d28d1597d432d6035d6b5b23c2f1d197b7783be60642"} Dec 03 09:19:10 crc kubenswrapper[4856]: I1203 09:19:10.610653 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:10 crc kubenswrapper[4856]: I1203 09:19:10.616202 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" Dec 03 09:19:10 crc kubenswrapper[4856]: I1203 09:19:10.635064 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d4ff77958-m7bps" podStartSLOduration=3.635036479 podStartE2EDuration="3.635036479s" podCreationTimestamp="2025-12-03 09:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:19:10.631390251 +0000 UTC m=+418.814282552" watchObservedRunningTime="2025-12-03 09:19:10.635036479 +0000 UTC m=+418.817928800" Dec 03 09:19:14 crc kubenswrapper[4856]: I1203 09:19:14.951503 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jv9fd" Dec 03 09:19:15 crc kubenswrapper[4856]: I1203 09:19:15.004113 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7nllr"] Dec 03 09:19:22 crc kubenswrapper[4856]: I1203 09:19:22.759233 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:19:22 crc kubenswrapper[4856]: I1203 09:19:22.759695 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:19:22 crc kubenswrapper[4856]: I1203 09:19:22.759762 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:19:22 crc kubenswrapper[4856]: I1203 09:19:22.760583 4856 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b159be1ed78d05acaad24716561c365860c40f23ddf723c0a6779a751d335caa"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:19:22 crc kubenswrapper[4856]: I1203 09:19:22.760647 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://b159be1ed78d05acaad24716561c365860c40f23ddf723c0a6779a751d335caa" gracePeriod=600 Dec 03 09:19:23 crc kubenswrapper[4856]: I1203 09:19:23.694467 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="b159be1ed78d05acaad24716561c365860c40f23ddf723c0a6779a751d335caa" exitCode=0 Dec 03 09:19:23 crc kubenswrapper[4856]: I1203 09:19:23.694538 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"b159be1ed78d05acaad24716561c365860c40f23ddf723c0a6779a751d335caa"} Dec 03 09:19:23 crc kubenswrapper[4856]: I1203 09:19:23.694958 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"07013b96980f0676347fe6a2c164de77c3fb10edbc203fe34faa52e278993b46"} Dec 03 09:19:23 crc kubenswrapper[4856]: I1203 09:19:23.694991 4856 scope.go:117] "RemoveContainer" containerID="d90cc73596539b6c478b80b727423d0f10c600eb880b8f947bc246a5e9364360" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.055401 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" podUID="33cee223-4bd1-4769-b794-5607b6610b92" containerName="registry" containerID="cri-o://7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0" gracePeriod=30 Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.446581 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497013 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33cee223-4bd1-4769-b794-5607b6610b92-ca-trust-extracted\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497109 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-bound-sa-token\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497145 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qg7p\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-kube-api-access-8qg7p\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497291 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33cee223-4bd1-4769-b794-5607b6610b92-installation-pull-secrets\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497545 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-trusted-ca\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-registry-tls\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.497944 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-registry-certificates\") pod \"33cee223-4bd1-4769-b794-5607b6610b92\" (UID: \"33cee223-4bd1-4769-b794-5607b6610b92\") " Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.498993 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.499133 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.505426 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cee223-4bd1-4769-b794-5607b6610b92-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.505463 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.506080 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.506247 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-kube-api-access-8qg7p" (OuterVolumeSpecName: "kube-api-access-8qg7p") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "kube-api-access-8qg7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.509451 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.519944 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cee223-4bd1-4769-b794-5607b6610b92-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "33cee223-4bd1-4769-b794-5607b6610b92" (UID: "33cee223-4bd1-4769-b794-5607b6610b92"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599764 4856 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599830 4856 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33cee223-4bd1-4769-b794-5607b6610b92-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599840 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qg7p\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-kube-api-access-8qg7p\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599855 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599864 4856 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33cee223-4bd1-4769-b794-5607b6610b92-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599873 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33cee223-4bd1-4769-b794-5607b6610b92-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.599883 4856 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33cee223-4bd1-4769-b794-5607b6610b92-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.839603 4856 generic.go:334] "Generic (PLEG): container finished" podID="33cee223-4bd1-4769-b794-5607b6610b92" containerID="7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0" exitCode=0 Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.839679 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.839726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" event={"ID":"33cee223-4bd1-4769-b794-5607b6610b92","Type":"ContainerDied","Data":"7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0"} Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.840375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7nllr" event={"ID":"33cee223-4bd1-4769-b794-5607b6610b92","Type":"ContainerDied","Data":"d556a85ce439e9a092aed89e4bca768a00da5f6c29a1ab635c6e007c1c58a437"} Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.840412 4856 scope.go:117] "RemoveContainer" containerID="7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.867076 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7nllr"] Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.872297 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7nllr"] Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.876112 4856 scope.go:117] "RemoveContainer" containerID="7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0" Dec 03 09:19:40 crc kubenswrapper[4856]: E1203 09:19:40.876627 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0\": container with ID starting with 7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0 not found: ID does not exist" containerID="7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0" Dec 03 09:19:40 crc kubenswrapper[4856]: I1203 09:19:40.876690 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0"} err="failed to get container status \"7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0\": rpc error: code = NotFound desc = could not find container \"7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0\": container with ID starting with 7c6773689a2b16721d4a011bf7341fa8ce5957d15bae0ffd40661af1aae7bdf0 not found: ID does not exist" Dec 03 09:19:42 crc kubenswrapper[4856]: I1203 09:19:42.700455 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cee223-4bd1-4769-b794-5607b6610b92" path="/var/lib/kubelet/pods/33cee223-4bd1-4769-b794-5607b6610b92/volumes" Dec 03 09:21:52 crc kubenswrapper[4856]: I1203 09:21:52.759165 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:21:52 crc kubenswrapper[4856]: I1203 09:21:52.759762 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:22:09 crc 
kubenswrapper[4856]: I1203 09:22:09.492038 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-x7sf7"] Dec 03 09:22:09 crc kubenswrapper[4856]: E1203 09:22:09.493030 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cee223-4bd1-4769-b794-5607b6610b92" containerName="registry" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.493049 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cee223-4bd1-4769-b794-5607b6610b92" containerName="registry" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.493183 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cee223-4bd1-4769-b794-5607b6610b92" containerName="registry" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.493635 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.496725 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.496918 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jhcvj" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.497213 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.531647 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-x7sf7"] Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.554387 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-9rpmm"] Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.555140 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-9rpmm" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.557401 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-f69g4" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.564651 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-84d4p"] Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.565550 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.567866 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zlgmb" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.574000 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-9rpmm"] Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.575876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vl4w\" (UniqueName: \"kubernetes.io/projected/d69625a4-8ce2-415d-ae2f-e0b5e3e63c96-kube-api-access-6vl4w\") pod \"cert-manager-cainjector-7f985d654d-x7sf7\" (UID: \"d69625a4-8ce2-415d-ae2f-e0b5e3e63c96\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.575996 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkkx\" (UniqueName: \"kubernetes.io/projected/1f803911-3cdc-40bf-8849-0a94fdf62f5c-kube-api-access-vkkkx\") pod \"cert-manager-5b446d88c5-9rpmm\" (UID: \"1f803911-3cdc-40bf-8849-0a94fdf62f5c\") " pod="cert-manager/cert-manager-5b446d88c5-9rpmm" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.576119 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfv7p\" (UniqueName: \"kubernetes.io/projected/a0555a5a-1ddc-46ae-b98a-7e4baa736e35-kube-api-access-wfv7p\") pod \"cert-manager-webhook-5655c58dd6-84d4p\" (UID: \"a0555a5a-1ddc-46ae-b98a-7e4baa736e35\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.577470 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-84d4p"] Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.677652 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vl4w\" (UniqueName: \"kubernetes.io/projected/d69625a4-8ce2-415d-ae2f-e0b5e3e63c96-kube-api-access-6vl4w\") pod \"cert-manager-cainjector-7f985d654d-x7sf7\" (UID: \"d69625a4-8ce2-415d-ae2f-e0b5e3e63c96\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.677937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkkkx\" (UniqueName: \"kubernetes.io/projected/1f803911-3cdc-40bf-8849-0a94fdf62f5c-kube-api-access-vkkkx\") pod \"cert-manager-5b446d88c5-9rpmm\" (UID: \"1f803911-3cdc-40bf-8849-0a94fdf62f5c\") " pod="cert-manager/cert-manager-5b446d88c5-9rpmm" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.678044 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfv7p\" (UniqueName: \"kubernetes.io/projected/a0555a5a-1ddc-46ae-b98a-7e4baa736e35-kube-api-access-wfv7p\") pod \"cert-manager-webhook-5655c58dd6-84d4p\" (UID: \"a0555a5a-1ddc-46ae-b98a-7e4baa736e35\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.698594 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vl4w\" (UniqueName: \"kubernetes.io/projected/d69625a4-8ce2-415d-ae2f-e0b5e3e63c96-kube-api-access-6vl4w\") pod \"cert-manager-cainjector-7f985d654d-x7sf7\" (UID: \"d69625a4-8ce2-415d-ae2f-e0b5e3e63c96\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.698864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkkkx\" (UniqueName: \"kubernetes.io/projected/1f803911-3cdc-40bf-8849-0a94fdf62f5c-kube-api-access-vkkkx\") pod \"cert-manager-5b446d88c5-9rpmm\" (UID: \"1f803911-3cdc-40bf-8849-0a94fdf62f5c\") " pod="cert-manager/cert-manager-5b446d88c5-9rpmm" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.705860 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfv7p\" (UniqueName: \"kubernetes.io/projected/a0555a5a-1ddc-46ae-b98a-7e4baa736e35-kube-api-access-wfv7p\") pod \"cert-manager-webhook-5655c58dd6-84d4p\" (UID: \"a0555a5a-1ddc-46ae-b98a-7e4baa736e35\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.816934 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.870053 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-9rpmm" Dec 03 09:22:09 crc kubenswrapper[4856]: I1203 09:22:09.879307 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.082069 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-x7sf7"] Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.091273 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.336249 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-84d4p"] Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.345891 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-9rpmm"] Dec 03 09:22:10 crc kubenswrapper[4856]: W1203 09:22:10.350289 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f803911_3cdc_40bf_8849_0a94fdf62f5c.slice/crio-943fd23be4120377c6e1eded08a921f6e47171f3562c7f064323d1cf27d01667 WatchSource:0}: Error finding container 943fd23be4120377c6e1eded08a921f6e47171f3562c7f064323d1cf27d01667: Status 404 returned error can't find the container with id 943fd23be4120377c6e1eded08a921f6e47171f3562c7f064323d1cf27d01667 Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.755946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-9rpmm" event={"ID":"1f803911-3cdc-40bf-8849-0a94fdf62f5c","Type":"ContainerStarted","Data":"943fd23be4120377c6e1eded08a921f6e47171f3562c7f064323d1cf27d01667"} Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.757424 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" event={"ID":"a0555a5a-1ddc-46ae-b98a-7e4baa736e35","Type":"ContainerStarted","Data":"2430392b8b9558130d7cb9874a692356f1c1e083d065f2a76c2148e055dfdb28"} Dec 03 09:22:10 crc kubenswrapper[4856]: I1203 09:22:10.759169 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" 
event={"ID":"d69625a4-8ce2-415d-ae2f-e0b5e3e63c96","Type":"ContainerStarted","Data":"635e371e77915b2502369f5894fc8fe74fe713f4a2822a1fa3b4a376c5969b21"} Dec 03 09:22:13 crc kubenswrapper[4856]: I1203 09:22:13.783193 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-9rpmm" event={"ID":"1f803911-3cdc-40bf-8849-0a94fdf62f5c","Type":"ContainerStarted","Data":"e20aba4728a29572be49cf244fbeb502d04ebb680858876a23a7dc10fd78a075"} Dec 03 09:22:13 crc kubenswrapper[4856]: I1203 09:22:13.786107 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" event={"ID":"a0555a5a-1ddc-46ae-b98a-7e4baa736e35","Type":"ContainerStarted","Data":"4d237c1954a142993502a1ed843a13a66a49caf09a64cf9661499582c70d88aa"} Dec 03 09:22:13 crc kubenswrapper[4856]: I1203 09:22:13.786274 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:13 crc kubenswrapper[4856]: I1203 09:22:13.787619 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" event={"ID":"d69625a4-8ce2-415d-ae2f-e0b5e3e63c96","Type":"ContainerStarted","Data":"cb7f1b99a5966f75c5833b333dbd529b487fae05db1e8ff156243f58568d7ae8"} Dec 03 09:22:13 crc kubenswrapper[4856]: I1203 09:22:13.815461 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-x7sf7" podStartSLOduration=1.787761024 podStartE2EDuration="4.815435441s" podCreationTimestamp="2025-12-03 09:22:09 +0000 UTC" firstStartedPulling="2025-12-03 09:22:10.090956252 +0000 UTC m=+598.273848553" lastFinishedPulling="2025-12-03 09:22:13.118630669 +0000 UTC m=+601.301522970" observedRunningTime="2025-12-03 09:22:13.812626479 +0000 UTC m=+601.995518790" watchObservedRunningTime="2025-12-03 09:22:13.815435441 +0000 UTC m=+601.998327742" Dec 03 09:22:13 crc kubenswrapper[4856]: I1203 09:22:13.815875 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-9rpmm" podStartSLOduration=2.024310978 podStartE2EDuration="4.815870402s" podCreationTimestamp="2025-12-03 09:22:09 +0000 UTC" firstStartedPulling="2025-12-03 09:22:10.353822567 +0000 UTC m=+598.536714868" lastFinishedPulling="2025-12-03 09:22:13.145382001 +0000 UTC m=+601.328274292" observedRunningTime="2025-12-03 09:22:13.800931006 +0000 UTC m=+601.983823347" watchObservedRunningTime="2025-12-03 09:22:13.815870402 +0000 UTC m=+601.998762703" Dec 03 09:22:19 crc kubenswrapper[4856]: I1203 09:22:19.883973 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" Dec 03 09:22:19 crc kubenswrapper[4856]: I1203 09:22:19.905949 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-84d4p" podStartSLOduration=8.111135394 podStartE2EDuration="10.905917241s" podCreationTimestamp="2025-12-03 09:22:09 +0000 UTC" firstStartedPulling="2025-12-03 09:22:10.337516026 +0000 UTC m=+598.520408327" lastFinishedPulling="2025-12-03 09:22:13.132297883 +0000 UTC m=+601.315190174" observedRunningTime="2025-12-03 09:22:13.837262785 +0000 UTC m=+602.020155086" watchObservedRunningTime="2025-12-03 09:22:19.905917241 +0000 UTC m=+608.088809542" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.205087 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-h2mjf"] Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206306 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-controller" containerID="cri-o://c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206462 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="northd" containerID="cri-o://6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206500 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="sbdb" containerID="cri-o://aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206556 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-acl-logging" containerID="cri-o://c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206584 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-node" containerID="cri-o://323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206445 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.206390 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="nbdb" containerID="cri-o://802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.234421 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" containerID="cri-o://b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" gracePeriod=30 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.494685 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/3.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.497563 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovn-acl-logging/0.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.498164 4856 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovn-controller/0.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.498648 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527689 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-etc-openvswitch\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527752 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-bin\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527783 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-var-lib-openvswitch\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527788 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-slash\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527853 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527872 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-ovn-kubernetes\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527917 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-script-lib\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527944 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-systemd\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527968 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-kubelet\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527964 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-slash" (OuterVolumeSpecName: "host-slash") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527999 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0244a363-96f5-4b97-824c-b62d42ecee2b-ovn-node-metrics-cert\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.527988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528089 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528096 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-log-socket\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528125 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-log-socket" (OuterVolumeSpecName: "log-socket") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-netd\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-systemd-units\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528231 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-env-overrides\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528247 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-config\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528271 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528245 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528304 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcgsg\" (UniqueName: \"kubernetes.io/projected/0244a363-96f5-4b97-824c-b62d42ecee2b-kube-api-access-vcgsg\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528232 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528275 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528328 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-openvswitch\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-netns\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528362 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528371 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-ovn\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528396 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528430 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-node-log\") pod \"0244a363-96f5-4b97-824c-b62d42ecee2b\" (UID: \"0244a363-96f5-4b97-824c-b62d42ecee2b\") " Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528486 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528511 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-node-log" (OuterVolumeSpecName: "node-log") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528874 4856 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528891 4856 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528906 4856 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528918 4856 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528932 4856 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528948 4856 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528960 4856 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528884 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528972 4856 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.529052 4856 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.529070 4856 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.529087 4856 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.529099 4856 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.530699 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.528517 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.536339 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0244a363-96f5-4b97-824c-b62d42ecee2b-kube-api-access-vcgsg" (OuterVolumeSpecName: "kube-api-access-vcgsg") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "kube-api-access-vcgsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.539418 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0244a363-96f5-4b97-824c-b62d42ecee2b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.552240 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0244a363-96f5-4b97-824c-b62d42ecee2b" (UID: "0244a363-96f5-4b97-824c-b62d42ecee2b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.561509 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jjjqm"] Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.562122 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="northd" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.562216 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="northd" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.562278 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="sbdb" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.562354 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="sbdb" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.562447 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kubecfg-setup" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.562520 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kubecfg-setup" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.562634 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.562814 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.563008 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.563110 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.563187 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="nbdb" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.563239 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="nbdb" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.563327 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-acl-logging" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.563403 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-acl-logging" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.563502 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.563594 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.563677 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.563748 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.563840 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.563935 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.564022 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-node" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564089 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-node" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564284 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564375 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="sbdb" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564483 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="northd" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564601 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-acl-logging" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564694 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564784 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564891 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="kube-rbac-proxy-node" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.564982 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="nbdb" Dec 03 09:22:20 crc 
kubenswrapper[4856]: I1203 09:22:20.565067 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.565144 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.565218 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovn-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.565296 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.565484 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.565569 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.565661 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.565735 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerName="ovnkube-controller" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.569620 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.630863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-cni-bin\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.630924 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-log-socket\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.630957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-ovn\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631183 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/46c4267c-71a9-4619-83b2-aa4d14331e49-ovn-node-metrics-cert\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631289 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-etc-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631350 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-systemd\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631428 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631449 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-var-lib-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631530 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-node-log\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631613 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-cni-netd\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631737 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-kubelet\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631766 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jttz\" (UniqueName: \"kubernetes.io/projected/46c4267c-71a9-4619-83b2-aa4d14331e49-kube-api-access-6jttz\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc 
kubenswrapper[4856]: I1203 09:22:20.631887 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-ovnkube-script-lib\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631920 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-slash\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.631965 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-run-netns\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632002 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-env-overrides\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632028 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-ovnkube-config\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632050 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-systemd-units\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632072 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-run-ovn-kubernetes\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632144 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632155 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632166 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcgsg\" (UniqueName: 
\"kubernetes.io/projected/0244a363-96f5-4b97-824c-b62d42ecee2b-kube-api-access-vcgsg\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632176 4856 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632186 4856 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632199 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0244a363-96f5-4b97-824c-b62d42ecee2b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632209 4856 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0244a363-96f5-4b97-824c-b62d42ecee2b-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.632218 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0244a363-96f5-4b97-824c-b62d42ecee2b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.733849 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-ovnkube-script-lib\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.733908 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-slash\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.733932 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-run-netns\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.733966 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-env-overrides\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.733987 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-ovnkube-config\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-run-ovn-kubernetes\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734023 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-systemd-units\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-cni-bin\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-log-socket\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734086 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-ovn\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734081 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-run-netns\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734716 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-ovn\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734828 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-run-ovn-kubernetes\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734716 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-log-socket\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734726 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-slash\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/46c4267c-71a9-4619-83b2-aa4d14331e49-ovn-node-metrics-cert\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734921 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-env-overrides\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.734900 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-systemd-units\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-etc-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735101 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735133 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-systemd\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735174 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735203 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-var-lib-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735278 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-node-log\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 
09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735335 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-cni-netd\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735044 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-etc-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735401 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-ovnkube-config\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735429 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-node-log\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735387 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-systemd\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735427 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-var-lib-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735492 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-cni-netd\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735458 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735597 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-kubelet\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735608 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-run-openvswitch\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735633 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jttz\" (UniqueName: \"kubernetes.io/projected/46c4267c-71a9-4619-83b2-aa4d14331e49-kube-api-access-6jttz\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.735633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-kubelet\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.736097 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/46c4267c-71a9-4619-83b2-aa4d14331e49-ovnkube-script-lib\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.736269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/46c4267c-71a9-4619-83b2-aa4d14331e49-host-cni-bin\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.737623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/46c4267c-71a9-4619-83b2-aa4d14331e49-ovn-node-metrics-cert\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.752975 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jttz\" (UniqueName: \"kubernetes.io/projected/46c4267c-71a9-4619-83b2-aa4d14331e49-kube-api-access-6jttz\") pod \"ovnkube-node-jjjqm\" (UID: \"46c4267c-71a9-4619-83b2-aa4d14331e49\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.845374 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/2.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.846082 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/1.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.846303 4856 generic.go:334] "Generic (PLEG): container finished" podID="29870646-4fde-4ebe-a3a9-0ef904f1bbaa" containerID="b847d2d7cf83fd2947a8e319e3e0a7ef3d973721089d0bf813a6eab492a07e06" exitCode=2 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.846392 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerDied","Data":"b847d2d7cf83fd2947a8e319e3e0a7ef3d973721089d0bf813a6eab492a07e06"} Dec 
03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.846493 4856 scope.go:117] "RemoveContainer" containerID="b3d6ceb9967723804424bfe35b922e7bbad7d2e1586e1e343569a9a04eebd440" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.847896 4856 scope.go:117] "RemoveContainer" containerID="b847d2d7cf83fd2947a8e319e3e0a7ef3d973721089d0bf813a6eab492a07e06" Dec 03 09:22:20 crc kubenswrapper[4856]: E1203 09:22:20.848589 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zpk2l_openshift-multus(29870646-4fde-4ebe-a3a9-0ef904f1bbaa)\"" pod="openshift-multus/multus-zpk2l" podUID="29870646-4fde-4ebe-a3a9-0ef904f1bbaa" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.849434 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovnkube-controller/3.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.852775 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovn-acl-logging/0.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.853575 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h2mjf_0244a363-96f5-4b97-824c-b62d42ecee2b/ovn-controller/0.log" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854143 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" exitCode=0 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854181 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" exitCode=0 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854193 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" exitCode=0 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854205 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" exitCode=0 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854216 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" exitCode=0 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854226 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" exitCode=0 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854239 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171" exitCode=143 Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854288 4856 generic.go:334] "Generic (PLEG): container finished" podID="0244a363-96f5-4b97-824c-b62d42ecee2b" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960" exitCode=143 Dec 03 09:22:20 crc kubenswrapper[4856]: 
I1203 09:22:20.854322 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854365 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854384 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854398 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854410 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854422 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854436 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854450 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854458 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854465 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854474 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854481 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854490 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854497 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854504 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854512 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854521 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854533 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854544 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854553 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854562 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854570 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854613 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854623 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854630 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854638 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854645 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854656 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854668 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854677 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854685 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854693 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854701 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854709 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854716 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854724 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854732 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854739 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854749 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" event={"ID":"0244a363-96f5-4b97-824c-b62d42ecee2b","Type":"ContainerDied","Data":"456a4383a5ce3e76e7e0a9b25d3a52e3a7e25053f11380dee6e30351f5fddd9c"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854759 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854768 4856 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854776 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854785 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854792 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854820 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854829 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854836 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854843 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854851 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.854986 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h2mjf" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.888554 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.890923 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h2mjf"] Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.893950 4856 scope.go:117] "RemoveContainer" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.894344 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h2mjf"] Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.921285 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:22:20 crc kubenswrapper[4856]: W1203 09:22:20.929862 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c4267c_71a9_4619_83b2_aa4d14331e49.slice/crio-ef9a9a7565b619b24fb9a2871773c8ab8a1d51de7ee5b3cd58983793bd674c5f WatchSource:0}: Error finding container ef9a9a7565b619b24fb9a2871773c8ab8a1d51de7ee5b3cd58983793bd674c5f: Status 404 returned error can't find the container with id ef9a9a7565b619b24fb9a2871773c8ab8a1d51de7ee5b3cd58983793bd674c5f Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.941552 4856 scope.go:117] "RemoveContainer" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.972633 4856 scope.go:117] "RemoveContainer" containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" Dec 03 09:22:20 crc kubenswrapper[4856]: I1203 09:22:20.994795 4856 scope.go:117] "RemoveContainer" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.019039 4856 scope.go:117] "RemoveContainer" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.043006 4856 scope.go:117] "RemoveContainer" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.061880 4856 scope.go:117] "RemoveContainer" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.078793 4856 scope.go:117] "RemoveContainer" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.105511 4856 scope.go:117] "RemoveContainer" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.163743 4856 scope.go:117] "RemoveContainer" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.167841 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": container with ID starting with b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e not found: ID does not exist" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.167929 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} err="failed to get container status \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": rpc error: code = NotFound desc = could not find container \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": container with ID starting with b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.167977 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.168624 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": container with ID starting with 94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2 not found: ID does not exist" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.168647 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} err="failed to get container status \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": rpc error: code = NotFound desc = could not find container \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": container with ID starting with 94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.168682 4856 scope.go:117] "RemoveContainer" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.169120 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": container with ID starting with aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1 not found: ID does not exist" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.169149 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} err="failed to get container status \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": rpc error: code = NotFound desc = could not find container \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": container with ID starting with aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.169168 4856 scope.go:117] "RemoveContainer" containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.169419 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": container with ID starting with 802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d not found: ID does not exist" 
containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.169457 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} err="failed to get container status \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": rpc error: code = NotFound desc = could not find container \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": container with ID starting with 802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.169476 4856 scope.go:117] "RemoveContainer" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.169793 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": container with ID starting with 6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612 not found: ID does not exist" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.169832 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} err="failed to get container status \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": rpc error: code = NotFound desc = could not find container \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": container with ID starting with 6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.169849 4856 scope.go:117] "RemoveContainer" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.170217 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": container with ID starting with 66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16 not found: ID does not exist" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.170237 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} err="failed to get container status \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": rpc error: code = NotFound desc = could not find container \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": container with ID starting with 66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.170254 4856 scope.go:117] "RemoveContainer" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.170563 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": container with ID starting with 323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34 not found: ID does not exist" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.170587 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} err="failed to get container status \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": rpc error: code = NotFound desc = could not find container \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": container with ID starting with 323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.170601 4856 scope.go:117] "RemoveContainer" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.170982 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": container with ID starting with c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171 not found: ID does not exist" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.171001 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} err="failed to get container status \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": rpc error: code = NotFound desc = could not find container \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": container with ID starting with c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.171014 4856 scope.go:117] "RemoveContainer" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960" Dec 03 09:22:21 crc kubenswrapper[4856]: E1203 09:22:21.171256 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": container with ID starting with c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960 not found: ID does not exist" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.171278 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} err="failed to get container status \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": rpc error: code = NotFound desc = could not find container \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": container with ID starting with c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.171292 4856 scope.go:117] "RemoveContainer" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819" Dec 03 09:22:21 crc 
kubenswrapper[4856]: E1203 09:22:21.172396 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": container with ID starting with aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819 not found: ID does not exist" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.172444 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} err="failed to get container status \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": rpc error: code = NotFound desc = could not find container \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": container with ID starting with aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.172458 4856 scope.go:117] "RemoveContainer" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.172990 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} err="failed to get container status \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": rpc error: code = NotFound desc = could not find container \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": container with ID starting with b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.173011 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.176242 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} err="failed to get container status \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": rpc error: code = NotFound desc = could not find container \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": container with ID starting with 94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.176269 4856 scope.go:117] "RemoveContainer" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.176590 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} err="failed to get container status \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": rpc error: code = NotFound desc = could not find container \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": container with ID starting with aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.176610 4856 scope.go:117] "RemoveContainer" containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" Dec 03 09:22:21 crc 
kubenswrapper[4856]: I1203 09:22:21.176925 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} err="failed to get container status \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": rpc error: code = NotFound desc = could not find container \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": container with ID starting with 802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.176985 4856 scope.go:117] "RemoveContainer" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.178349 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} err="failed to get container status \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": rpc error: code = NotFound desc = could not find container \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": container with ID starting with 6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.178390 4856 scope.go:117] "RemoveContainer" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.178643 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} err="failed to get container status \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": rpc error: code = NotFound desc = could not find container \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": container with ID starting with 66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.178663 4856 scope.go:117] "RemoveContainer" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.179551 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} err="failed to get container status \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": rpc error: code = NotFound desc = could not find container \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": container with ID starting with 323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.179797 4856 scope.go:117] "RemoveContainer" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.180604 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} err="failed to get container status \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": rpc error: code = NotFound desc = could not find container \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": container with ID 
starting with c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.180627 4856 scope.go:117] "RemoveContainer" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.182221 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} err="failed to get container status \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": rpc error: code = NotFound desc = could not find container \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": container with ID starting with c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.182331 4856 scope.go:117] "RemoveContainer" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.183361 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} err="failed to get container status \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": rpc error: code = NotFound desc = could not find container \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": container with ID starting with aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.183431 4856 scope.go:117] "RemoveContainer" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.185361 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} err="failed to get container status \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": rpc error: code = NotFound desc = could not find container \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": container with ID starting with b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.185397 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.185993 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} err="failed to get container status \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": rpc error: code = NotFound desc = could not find container \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": container with ID starting with 94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.186025 4856 scope.go:117] "RemoveContainer" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.186459 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} err="failed to get container status \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": rpc error: code = NotFound desc = could not find container \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": container with ID starting with aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.186483 4856 scope.go:117] "RemoveContainer" containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.186909 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} err="failed to get container status \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": rpc error: code = NotFound desc = could not find container \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": container with ID starting with 802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.186952 4856 scope.go:117] "RemoveContainer" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.187338 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} err="failed to get container status \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": rpc error: code = NotFound desc = could not find container \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": container with ID starting with 6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.187365 4856 scope.go:117] "RemoveContainer" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.187912 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} err="failed to get container status \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": rpc error: code = NotFound desc = could not find container \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": container with ID starting with 66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16 not found: ID does not exist" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.187987 4856 scope.go:117] "RemoveContainer" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34" Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.188472 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} err="failed to get container status \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": rpc error: code = NotFound desc = could not find container \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": container with ID starting with 323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34 not found: ID does not exist" Dec 
03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.188500 4856 scope.go:117] "RemoveContainer" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.189747 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} err="failed to get container status \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": rpc error: code = NotFound desc = could not find container \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": container with ID starting with c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.189788 4856 scope.go:117] "RemoveContainer" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.190870 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} err="failed to get container status \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": rpc error: code = NotFound desc = could not find container \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": container with ID starting with c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.190927 4856 scope.go:117] "RemoveContainer" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.191362 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} err="failed to get container status \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": rpc error: code = NotFound desc = could not find container \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": container with ID starting with aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.191389 4856 scope.go:117] "RemoveContainer" containerID="b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.192473 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e"} err="failed to get container status \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": rpc error: code = NotFound desc = could not find container \"b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e\": container with ID starting with b05be120fdf472d9dd00858b1222250e0c8827fb585ec4c9aa582a8d0e54691e not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.192614 4856 scope.go:117] "RemoveContainer" containerID="94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.193739 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2"} err="failed to get container status \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": rpc error: code = NotFound desc = could not find container \"94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2\": container with ID starting with 94852e0e239520decf531ec799b18ac3edc5f9229fe9dafe24943587072868d2 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.193776 4856 scope.go:117] "RemoveContainer" containerID="aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.195168 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1"} err="failed to get container status \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": rpc error: code = NotFound desc = could not find container \"aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1\": container with ID starting with aaf652ef6e211d7deec69aa163d3227dde3fc5c3d54d2b897ce31123080530a1 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.195201 4856 scope.go:117] "RemoveContainer" containerID="802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.195675 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d"} err="failed to get container status \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": rpc error: code = NotFound desc = could not find container \"802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d\": container with ID starting with 802477ff49918c9aabcaa1d9f285e5bbc2805f39961fb2cbba40a41f297c6d7d not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.195736 4856 scope.go:117] "RemoveContainer" containerID="6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.196103 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612"} err="failed to get container status \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": rpc error: code = NotFound desc = could not find container \"6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612\": container with ID starting with 6ebb4ea7bd77207f3871619d831feecdf0ad95d27e41afb7f85f906b83c94612 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.196152 4856 scope.go:117] "RemoveContainer" containerID="66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.196453 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16"} err="failed to get container status \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": rpc error: code = NotFound desc = could not find container \"66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16\": container with ID starting with 66d2939d5fa2243218bc7916af8770e04132c0c38f3d260bda177caa2a899e16 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.196478 4856 scope.go:117] "RemoveContainer" containerID="323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.196709 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34"} err="failed to get container status \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": rpc error: code = NotFound desc = could not find container \"323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34\": container with ID starting with 323b97c051cb98b91b2df4d0f34bd086dfb122e4db8eb17fe2d47413f5629b34 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.196730 4856 scope.go:117] "RemoveContainer" containerID="c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.197157 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171"} err="failed to get container status \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": rpc error: code = NotFound desc = could not find container \"c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171\": container with ID starting with c6a1f1ac00c8ebfd320ba4778f03bf07c8dcb140d2f2b1fb75a3451b1654a171 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.197244 4856 scope.go:117] "RemoveContainer" containerID="c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.197668 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960"} err="failed to get container status \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": rpc error: code = NotFound desc = could not find container \"c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960\": container with ID starting with c076dc106f57c0dd9b121555c6bbe0b1999d1d1442068d821324e8cde631e960 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.197700 4856 scope.go:117] "RemoveContainer" containerID="aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.197992 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819"} err="failed to get container status \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": rpc error: code = NotFound desc = could not find container \"aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819\": container with ID starting with aaed33ff92a14357c0c0b53fd5c41011a41fc25e2ec597b48e0e16a12e161819 not found: ID does not exist"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.863973 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/2.log"
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.867341 4856 generic.go:334] "Generic (PLEG): container finished" podID="46c4267c-71a9-4619-83b2-aa4d14331e49" containerID="6074c98df16ddcabf0c38d9d1badd25450b03987aa6829696cb7588a10cbfe01" exitCode=0
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.867411 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerDied","Data":"6074c98df16ddcabf0c38d9d1badd25450b03987aa6829696cb7588a10cbfe01"}
Dec 03 09:22:21 crc kubenswrapper[4856]: I1203 09:22:21.867460 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"ef9a9a7565b619b24fb9a2871773c8ab8a1d51de7ee5b3cd58983793bd674c5f"}
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.697279 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0244a363-96f5-4b97-824c-b62d42ecee2b" path="/var/lib/kubelet/pods/0244a363-96f5-4b97-824c-b62d42ecee2b/volumes"
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.758768 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.758894 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.878234 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"518c939e4f6bf0b16672eda0f7e907120fb44ab33083ba08ca6fa4aea67574c8"}
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.878297 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"df2214c364195a9e8e3762c4522b1014e208388dd56a61dde5afcafe53bb67df"}
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.878312 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"e7d77b9da59955762ca66b91fbfb953f2a1b3376a3d38bd98aa4135e1688ad5f"}
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.878322 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"c46dbc9ccd7abea0c5ba636d0692d2e74269f3bb13baa1c815681683a32e3cee"}
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.878332 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"fe9a651b377fe5e378ceb7d449c806b87ebe61aa74f63f3bd409ab000fe2d1e2"}
Dec 03 09:22:22 crc kubenswrapper[4856]: I1203 09:22:22.878345 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"e6a492c560428ed2e61e06987bb78f012593eba2c2e5f117cf017c9ef5375a45"}
Dec 03 09:22:24 crc kubenswrapper[4856]: I1203 09:22:24.892966 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"fa96a975862d34eeda560128e505ade61d5d83a442cf4f4a90e99a9cd389e846"}
Dec 03 09:22:27 crc kubenswrapper[4856]: I1203 09:22:27.917509 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" event={"ID":"46c4267c-71a9-4619-83b2-aa4d14331e49","Type":"ContainerStarted","Data":"401cad92df50a911dc3c262ba87776db33b2792a871b805283febf57ee8cecf9"}
Dec 03 09:22:27 crc kubenswrapper[4856]: I1203 09:22:27.918161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm"
Dec 03 09:22:27 crc kubenswrapper[4856]: I1203 09:22:27.918191 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm"
Dec 03 09:22:27 crc kubenswrapper[4856]: I1203 09:22:27.952399 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm"
Dec 03 09:22:27 crc kubenswrapper[4856]: I1203 09:22:27.956393 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm" podStartSLOduration=7.956372876 podStartE2EDuration="7.956372876s" podCreationTimestamp="2025-12-03 09:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:22:27.952754474 +0000 UTC m=+616.135646785" watchObservedRunningTime="2025-12-03 09:22:27.956372876 +0000 UTC m=+616.139265177"
Dec 03 09:22:28 crc kubenswrapper[4856]: I1203 09:22:28.924222 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm"
Dec 03 09:22:28 crc kubenswrapper[4856]: I1203 09:22:28.958390 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm"
Dec 03 09:22:32 crc kubenswrapper[4856]: I1203 09:22:32.692457 4856 scope.go:117] "RemoveContainer" containerID="b847d2d7cf83fd2947a8e319e3e0a7ef3d973721089d0bf813a6eab492a07e06"
Dec 03 09:22:32 crc kubenswrapper[4856]: E1203 09:22:32.693041 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zpk2l_openshift-multus(29870646-4fde-4ebe-a3a9-0ef904f1bbaa)\"" pod="openshift-multus/multus-zpk2l" podUID="29870646-4fde-4ebe-a3a9-0ef904f1bbaa"
Dec 03 09:22:44 crc kubenswrapper[4856]: I1203 09:22:44.690683 4856 scope.go:117] "RemoveContainer" containerID="b847d2d7cf83fd2947a8e319e3e0a7ef3d973721089d0bf813a6eab492a07e06"
Dec 03 09:22:46 crc kubenswrapper[4856]: I1203 09:22:46.030860 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zpk2l_29870646-4fde-4ebe-a3a9-0ef904f1bbaa/kube-multus/2.log"
Dec 03 09:22:46 crc kubenswrapper[4856]: I1203 09:22:46.031792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zpk2l" event={"ID":"29870646-4fde-4ebe-a3a9-0ef904f1bbaa","Type":"ContainerStarted","Data":"593958e9b36ede245f7bac47bc6998baa74d2b3dc87809f48e6df30437012833"}
Dec 03 09:22:50 crc kubenswrapper[4856]: I1203 09:22:50.925188 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jjjqm"
Dec 03 09:22:52 crc kubenswrapper[4856]: I1203 09:22:52.759391 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 09:22:52 crc kubenswrapper[4856]: I1203 09:22:52.760796 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 09:22:52 crc kubenswrapper[4856]: I1203 09:22:52.760955 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w"
Dec 03 09:22:52 crc kubenswrapper[4856]: I1203 09:22:52.761643 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07013b96980f0676347fe6a2c164de77c3fb10edbc203fe34faa52e278993b46"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 09:22:52 crc kubenswrapper[4856]: I1203 09:22:52.761788 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://07013b96980f0676347fe6a2c164de77c3fb10edbc203fe34faa52e278993b46" gracePeriod=600
Dec 03 09:22:53 crc kubenswrapper[4856]: I1203 09:22:53.078064 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="07013b96980f0676347fe6a2c164de77c3fb10edbc203fe34faa52e278993b46" exitCode=0
Dec 03 09:22:53 crc kubenswrapper[4856]: I1203 09:22:53.078269 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"07013b96980f0676347fe6a2c164de77c3fb10edbc203fe34faa52e278993b46"}
Dec 03 09:22:53 crc kubenswrapper[4856]: I1203 09:22:53.078548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"4afd4c86b5fa95dc80aeac436708ba8256d79892daf553ea2ccc017832ea0fc1"}
Dec 03 09:22:53 crc kubenswrapper[4856]: I1203 09:22:53.078577 4856 scope.go:117] "RemoveContainer" containerID="b159be1ed78d05acaad24716561c365860c40f23ddf723c0a6779a751d335caa"
Dec 03 09:22:58 crc kubenswrapper[4856]: I1203 09:22:58.971661 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"]
Dec 03 09:22:58 crc kubenswrapper[4856]: I1203 09:22:58.973655 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:58 crc kubenswrapper[4856]: I1203 09:22:58.975998 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 03 09:22:58 crc kubenswrapper[4856]: I1203 09:22:58.987848 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"]
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.125027 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.125131 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.125246 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbq9\" (UniqueName: \"kubernetes.io/projected/46751f58-5b43-408e-8722-5155ceba0ebb-kube-api-access-ppbq9\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.227117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.227193 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.227236 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbq9\" (UniqueName: \"kubernetes.io/projected/46751f58-5b43-408e-8722-5155ceba0ebb-kube-api-access-ppbq9\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.227550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.227985 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.251702 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbq9\" (UniqueName: \"kubernetes.io/projected/46751f58-5b43-408e-8722-5155ceba0ebb-kube-api-access-ppbq9\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.292494 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:22:59 crc kubenswrapper[4856]: I1203 09:22:59.502570 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"]
Dec 03 09:23:00 crc kubenswrapper[4856]: I1203 09:23:00.123921 4856 generic.go:334] "Generic (PLEG): container finished" podID="46751f58-5b43-408e-8722-5155ceba0ebb" containerID="ebd4c9f7f65bd08cc8165132ba61c8b82b05b27052fef96146b240b991bd8c4f" exitCode=0
Dec 03 09:23:00 crc kubenswrapper[4856]: I1203 09:23:00.123982 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd" event={"ID":"46751f58-5b43-408e-8722-5155ceba0ebb","Type":"ContainerDied","Data":"ebd4c9f7f65bd08cc8165132ba61c8b82b05b27052fef96146b240b991bd8c4f"}
Dec 03 09:23:00 crc kubenswrapper[4856]: I1203 09:23:00.124034 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd" event={"ID":"46751f58-5b43-408e-8722-5155ceba0ebb","Type":"ContainerStarted","Data":"eaeea6b4d3fb028f29adc817635a05485c0aea44f6aacfe8ad8e3558463f87e4"}
Dec 03 09:23:02 crc kubenswrapper[4856]: I1203 09:23:02.145905 4856 generic.go:334] "Generic (PLEG): container finished" podID="46751f58-5b43-408e-8722-5155ceba0ebb" containerID="3618270ccd7d6d078858e3f0b0881cab8c7ce81b6e88859082ba0af686d9ea74" exitCode=0
Dec 03 09:23:02 crc kubenswrapper[4856]: I1203 09:23:02.146008 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd" event={"ID":"46751f58-5b43-408e-8722-5155ceba0ebb","Type":"ContainerDied","Data":"3618270ccd7d6d078858e3f0b0881cab8c7ce81b6e88859082ba0af686d9ea74"}
Dec 03 09:23:03 crc kubenswrapper[4856]: I1203 09:23:03.152504 4856 generic.go:334] "Generic (PLEG): container finished" podID="46751f58-5b43-408e-8722-5155ceba0ebb" containerID="ee85caeb211c2ba49eb1b121074442c5aab19a37333cb6c1a47fac18750f2c6d" exitCode=0
Dec 03 09:23:03 crc kubenswrapper[4856]: I1203 09:23:03.152560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd" event={"ID":"46751f58-5b43-408e-8722-5155ceba0ebb","Type":"ContainerDied","Data":"ee85caeb211c2ba49eb1b121074442c5aab19a37333cb6c1a47fac18750f2c6d"}
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.403164 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.501917 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-bundle\") pod \"46751f58-5b43-408e-8722-5155ceba0ebb\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") "
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.502026 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbq9\" (UniqueName: \"kubernetes.io/projected/46751f58-5b43-408e-8722-5155ceba0ebb-kube-api-access-ppbq9\") pod \"46751f58-5b43-408e-8722-5155ceba0ebb\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") "
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.502060 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-util\") pod \"46751f58-5b43-408e-8722-5155ceba0ebb\" (UID: \"46751f58-5b43-408e-8722-5155ceba0ebb\") "
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.502734 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-bundle" (OuterVolumeSpecName: "bundle") pod "46751f58-5b43-408e-8722-5155ceba0ebb" (UID: "46751f58-5b43-408e-8722-5155ceba0ebb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.510902 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46751f58-5b43-408e-8722-5155ceba0ebb-kube-api-access-ppbq9" (OuterVolumeSpecName: "kube-api-access-ppbq9") pod "46751f58-5b43-408e-8722-5155ceba0ebb" (UID: "46751f58-5b43-408e-8722-5155ceba0ebb"). InnerVolumeSpecName "kube-api-access-ppbq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.516724 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-util" (OuterVolumeSpecName: "util") pod "46751f58-5b43-408e-8722-5155ceba0ebb" (UID: "46751f58-5b43-408e-8722-5155ceba0ebb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.603918 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.603954 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbq9\" (UniqueName: \"kubernetes.io/projected/46751f58-5b43-408e-8722-5155ceba0ebb-kube-api-access-ppbq9\") on node \"crc\" DevicePath \"\""
Dec 03 09:23:04 crc kubenswrapper[4856]: I1203 09:23:04.603967 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/46751f58-5b43-408e-8722-5155ceba0ebb-util\") on node \"crc\" DevicePath \"\""
Dec 03 09:23:05 crc kubenswrapper[4856]: I1203 09:23:05.168987 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd" event={"ID":"46751f58-5b43-408e-8722-5155ceba0ebb","Type":"ContainerDied","Data":"eaeea6b4d3fb028f29adc817635a05485c0aea44f6aacfe8ad8e3558463f87e4"}
Dec 03 09:23:05 crc kubenswrapper[4856]: I1203 09:23:05.169062 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaeea6b4d3fb028f29adc817635a05485c0aea44f6aacfe8ad8e3558463f87e4"
Dec 03 09:23:05 crc kubenswrapper[4856]: I1203 09:23:05.169083 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.632458 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"]
Dec 03 09:23:06 crc kubenswrapper[4856]: E1203 09:23:06.633026 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="util"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.633043 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="util"
Dec 03 09:23:06 crc kubenswrapper[4856]: E1203 09:23:06.633055 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="extract"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.633062 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="extract"
Dec 03 09:23:06 crc kubenswrapper[4856]: E1203 09:23:06.633082 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="pull"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.633090 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="pull"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.633210 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="46751f58-5b43-408e-8722-5155ceba0ebb" containerName="extract"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.633629 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.636627 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.636733 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gnrkw"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.636917 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.646931 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"]
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.733005 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzr9q\" (UniqueName: \"kubernetes.io/projected/05257373-572f-4663-911d-8f50b368b390-kube-api-access-vzr9q\") pod \"nmstate-operator-5b5b58f5c8-88642\" (UID: \"05257373-572f-4663-911d-8f50b368b390\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.834425 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzr9q\" (UniqueName: \"kubernetes.io/projected/05257373-572f-4663-911d-8f50b368b390-kube-api-access-vzr9q\") pod \"nmstate-operator-5b5b58f5c8-88642\" (UID: \"05257373-572f-4663-911d-8f50b368b390\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.854574 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzr9q\" (UniqueName: \"kubernetes.io/projected/05257373-572f-4663-911d-8f50b368b390-kube-api-access-vzr9q\") pod \"nmstate-operator-5b5b58f5c8-88642\" (UID: \"05257373-572f-4663-911d-8f50b368b390\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"
Dec 03 09:23:06 crc kubenswrapper[4856]: I1203 09:23:06.953677 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"
Dec 03 09:23:07 crc kubenswrapper[4856]: I1203 09:23:07.141706 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-88642"]
Dec 03 09:23:07 crc kubenswrapper[4856]: I1203 09:23:07.183856 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642" event={"ID":"05257373-572f-4663-911d-8f50b368b390","Type":"ContainerStarted","Data":"c9bacea50d8b18f122495847eab6bc9bbfc8d88d7c911aa9cd9bb7a865dcb7d8"}
Dec 03 09:23:10 crc kubenswrapper[4856]: I1203 09:23:10.209293 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642" event={"ID":"05257373-572f-4663-911d-8f50b368b390","Type":"ContainerStarted","Data":"e5fe23d60aa27df4b6336d739535d0915c791b9ff55666d02126b878dab403ac"}
Dec 03 09:23:10 crc kubenswrapper[4856]: I1203 09:23:10.236936 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-88642" podStartSLOduration=1.82296445 podStartE2EDuration="4.236916608s" podCreationTimestamp="2025-12-03 09:23:06 +0000 UTC" firstStartedPulling="2025-12-03 09:23:07.163780783 +0000 UTC m=+655.346673084" lastFinishedPulling="2025-12-03 09:23:09.577732941 +0000 UTC m=+657.760625242" observedRunningTime="2025-12-03 09:23:10.236062266 +0000 UTC m=+658.418954567" watchObservedRunningTime="2025-12-03 09:23:10.236916608 +0000 UTC m=+658.419808909"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.222225 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.223168 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.227320 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7jc8c"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.250650 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.251577 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.254104 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.254298 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gwwkk"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.255050 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.261474 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.276729 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.303101 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8qs2\" (UniqueName: \"kubernetes.io/projected/a981397a-976a-4fc4-8cb9-af2d72410121-kube-api-access-q8qs2\") pod \"nmstate-metrics-7f946cbc9-sckh8\" (UID: \"a981397a-976a-4fc4-8cb9-af2d72410121\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405238 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-ovs-socket\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405307 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-dbus-socket\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405340 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e91cc63e-1f90-4427-942e-fe3645f8ee86-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vw9qx\" (UID: \"e91cc63e-1f90-4427-942e-fe3645f8ee86\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405397 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8qs2\" (UniqueName: \"kubernetes.io/projected/a981397a-976a-4fc4-8cb9-af2d72410121-kube-api-access-q8qs2\") pod \"nmstate-metrics-7f946cbc9-sckh8\" (UID: \"a981397a-976a-4fc4-8cb9-af2d72410121\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405423 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkd7\" (UniqueName: \"kubernetes.io/projected/df2e4dae-23c5-4f2a-978d-1e7293553f21-kube-api-access-8tkd7\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405460 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hxsh\" (UniqueName: \"kubernetes.io/projected/e91cc63e-1f90-4427-942e-fe3645f8ee86-kube-api-access-2hxsh\") pod \"nmstate-webhook-5f6d4c5ccb-vw9qx\" (UID: \"e91cc63e-1f90-4427-942e-fe3645f8ee86\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.405513 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-nmstate-lock\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.426616 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.428022 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.432543 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.432639 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.438574 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8qs2\" (UniqueName: \"kubernetes.io/projected/a981397a-976a-4fc4-8cb9-af2d72410121-kube-api-access-q8qs2\") pod \"nmstate-metrics-7f946cbc9-sckh8\" (UID: \"a981397a-976a-4fc4-8cb9-af2d72410121\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.440900 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.447980 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-64q6j"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.507343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-nmstate-lock\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.507419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvcl\" (UniqueName: \"kubernetes.io/projected/bca24c59-d3d7-42fb-a6b5-226bece344db-kube-api-access-bjvcl\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.507453 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-ovs-socket\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.507483 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-dbus-socket\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.507705 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-nmstate-lock\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.507770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-ovs-socket\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.508048 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df2e4dae-23c5-4f2a-978d-1e7293553f21-dbus-socket\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.515592 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e91cc63e-1f90-4427-942e-fe3645f8ee86-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vw9qx\" (UID: \"e91cc63e-1f90-4427-942e-fe3645f8ee86\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.520883 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e91cc63e-1f90-4427-942e-fe3645f8ee86-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vw9qx\" (UID: \"e91cc63e-1f90-4427-942e-fe3645f8ee86\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.521046 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca24c59-d3d7-42fb-a6b5-226bece344db-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.521123 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tkd7\" (UniqueName: \"kubernetes.io/projected/df2e4dae-23c5-4f2a-978d-1e7293553f21-kube-api-access-8tkd7\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.521141 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bca24c59-d3d7-42fb-a6b5-226bece344db-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.521196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hxsh\" (UniqueName: \"kubernetes.io/projected/e91cc63e-1f90-4427-942e-fe3645f8ee86-kube-api-access-2hxsh\") pod \"nmstate-webhook-5f6d4c5ccb-vw9qx\" (UID: \"e91cc63e-1f90-4427-942e-fe3645f8ee86\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.540493 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.544080 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tkd7\" (UniqueName: \"kubernetes.io/projected/df2e4dae-23c5-4f2a-978d-1e7293553f21-kube-api-access-8tkd7\") pod \"nmstate-handler-gwwkk\" (UID: \"df2e4dae-23c5-4f2a-978d-1e7293553f21\") " pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.546078 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hxsh\" (UniqueName: \"kubernetes.io/projected/e91cc63e-1f90-4427-942e-fe3645f8ee86-kube-api-access-2hxsh\") pod \"nmstate-webhook-5f6d4c5ccb-vw9qx\" (UID: \"e91cc63e-1f90-4427-942e-fe3645f8ee86\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.571155 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.580940 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.599368 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-84b86b69b6-ml69w"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.600189 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.610078 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84b86b69b6-ml69w"]
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.622717 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvcl\" (UniqueName: \"kubernetes.io/projected/bca24c59-d3d7-42fb-a6b5-226bece344db-kube-api-access-bjvcl\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.622776 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca24c59-d3d7-42fb-a6b5-226bece344db-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.622823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bca24c59-d3d7-42fb-a6b5-226bece344db-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.623912 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bca24c59-d3d7-42fb-a6b5-226bece344db-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: E1203 09:23:11.623989 4856 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Dec 03 09:23:11 crc kubenswrapper[4856]: E1203 09:23:11.624029 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bca24c59-d3d7-42fb-a6b5-226bece344db-plugin-serving-cert podName:bca24c59-d3d7-42fb-a6b5-226bece344db nodeName:}" failed. No retries permitted until 2025-12-03 09:23:12.124014332 +0000 UTC m=+660.306906633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bca24c59-d3d7-42fb-a6b5-226bece344db-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-2ps5x" (UID: "bca24c59-d3d7-42fb-a6b5-226bece344db") : secret "plugin-serving-cert" not found
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.657505 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvcl\" (UniqueName: \"kubernetes.io/projected/bca24c59-d3d7-42fb-a6b5-226bece344db-kube-api-access-bjvcl\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.723896 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjpl\" (UniqueName: \"kubernetes.io/projected/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-kube-api-access-qpjpl\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.723950 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-service-ca\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.723991 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-oauth-serving-cert\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.724015 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-serving-cert\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.724034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-trusted-ca-bundle\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.724067 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-config\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.724084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-oauth-config\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.803046 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8"]
Dec 03 09:23:11 crc kubenswrapper[4856]: W1203 09:23:11.813029 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda981397a_976a_4fc4_8cb9_af2d72410121.slice/crio-0f767933cf30e57e6fb01aa81081963ad81cbddce2c5c36d80b85cef5a08efc7 WatchSource:0}: Error finding container 0f767933cf30e57e6fb01aa81081963ad81cbddce2c5c36d80b85cef5a08efc7: Status 404 returned error can't find the container with id 0f767933cf30e57e6fb01aa81081963ad81cbddce2c5c36d80b85cef5a08efc7
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825445 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-serving-cert\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825487 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-trusted-ca-bundle\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825561 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-config\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825588 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-oauth-config\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825663 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjpl\" (UniqueName: \"kubernetes.io/projected/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-kube-api-access-qpjpl\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825690 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-service-ca\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.825789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-oauth-serving-cert\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.826907 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-oauth-serving-cert\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.828194 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-trusted-ca-bundle\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.830240 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-config\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.830886 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-service-ca\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.833635 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-serving-cert\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.835992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-console-oauth-config\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.851743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjpl\" (UniqueName: \"kubernetes.io/projected/24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4-kube-api-access-qpjpl\") pod \"console-84b86b69b6-ml69w\" (UID: \"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4\") " pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.854226 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"]
Dec 03 09:23:11 crc kubenswrapper[4856]: W1203 09:23:11.861162 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode91cc63e_1f90_4427_942e_fe3645f8ee86.slice/crio-e4a5b73eac0db2bb32e5c881f04a528332effe6a9f6b4ac409bcd418ba101008 WatchSource:0}: Error finding container e4a5b73eac0db2bb32e5c881f04a528332effe6a9f6b4ac409bcd418ba101008: Status 404 returned error can't find the container with id e4a5b73eac0db2bb32e5c881f04a528332effe6a9f6b4ac409bcd418ba101008
Dec 03 09:23:11 crc kubenswrapper[4856]: I1203 09:23:11.926335 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84b86b69b6-ml69w"
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.086023 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84b86b69b6-ml69w"]
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.128394 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca24c59-d3d7-42fb-a6b5-226bece344db-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.132596 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca24c59-d3d7-42fb-a6b5-226bece344db-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2ps5x\" (UID: \"bca24c59-d3d7-42fb-a6b5-226bece344db\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.224767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gwwkk" event={"ID":"df2e4dae-23c5-4f2a-978d-1e7293553f21","Type":"ContainerStarted","Data":"2983b3b64164c8a67d25d217033e82e8b21fa73d1c55d09103f7bdc2039a3d42"}
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.226571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84b86b69b6-ml69w" event={"ID":"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4","Type":"ContainerStarted","Data":"770d28bf6cab42b86bb4eebe62894a1d497cfbe92ca36e29d80f3c1d62a3224e"}
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.228430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8" event={"ID":"a981397a-976a-4fc4-8cb9-af2d72410121","Type":"ContainerStarted","Data":"0f767933cf30e57e6fb01aa81081963ad81cbddce2c5c36d80b85cef5a08efc7"}
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.229614 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx" event={"ID":"e91cc63e-1f90-4427-942e-fe3645f8ee86","Type":"ContainerStarted","Data":"e4a5b73eac0db2bb32e5c881f04a528332effe6a9f6b4ac409bcd418ba101008"}
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.383981 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-64q6j"
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.392560 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"
Dec 03 09:23:12 crc kubenswrapper[4856]: I1203 09:23:12.566550 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x"]
Dec 03 09:23:13 crc kubenswrapper[4856]: I1203 09:23:13.237639 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84b86b69b6-ml69w" event={"ID":"24ece8b4-3c6b-4b0c-b8d4-d870d394c3e4","Type":"ContainerStarted","Data":"92e0f924105f904f67e2490522b78507055ec1542857278f59d852ba8e1fd389"}
Dec 03 09:23:13 crc kubenswrapper[4856]: I1203 09:23:13.239767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x" event={"ID":"bca24c59-d3d7-42fb-a6b5-226bece344db","Type":"ContainerStarted","Data":"d1cfe570da65a9a3d71463df79569ed8281b949ddca89d894f59bd8e1f56f03b"}
Dec 03 09:23:13 crc kubenswrapper[4856]: I1203 09:23:13.260511 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84b86b69b6-ml69w" podStartSLOduration=2.260491842 podStartE2EDuration="2.260491842s" podCreationTimestamp="2025-12-03 09:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:23:13.256000676 +0000 UTC m=+661.438892977" watchObservedRunningTime="2025-12-03 09:23:13.260491842 +0000 UTC m=+661.443384143"
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.254910 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x" event={"ID":"bca24c59-d3d7-42fb-a6b5-226bece344db","Type":"ContainerStarted","Data":"ebe3817f3c047cdaa996dab6a1b42b625d47f2cabf4a49886b0382486d2d40ab"}
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.256876 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx" event={"ID":"e91cc63e-1f90-4427-942e-fe3645f8ee86","Type":"ContainerStarted","Data":"0a9c27a2001d87d7e2d0c58b2fb756dd68897ce4d4dfdda2b49bc44ee3fec447"}
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.257006 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx"
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.258927 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gwwkk" event={"ID":"df2e4dae-23c5-4f2a-978d-1e7293553f21","Type":"ContainerStarted","Data":"113b765c874777b6019e3a14ad669aefb59528b775a39255f25c6a68580aa1ad"}
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.259029 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gwwkk"
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.260213 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8" event={"ID":"a981397a-976a-4fc4-8cb9-af2d72410121","Type":"ContainerStarted","Data":"8fda91d9d837e33ea994f90a854550fbf477d28d13630538890319a29ce4aed3"}
Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.269148 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2ps5x" podStartSLOduration=1.8736636679999998 podStartE2EDuration="4.269133854s" podCreationTimestamp="2025-12-03 09:23:11 +0000 UTC" firstStartedPulling="2025-12-03 09:23:12.581637653 +0000 UTC
m=+660.764529954" lastFinishedPulling="2025-12-03 09:23:14.977107839 +0000 UTC m=+663.160000140" observedRunningTime="2025-12-03 09:23:15.267200574 +0000 UTC m=+663.450092875" watchObservedRunningTime="2025-12-03 09:23:15.269133854 +0000 UTC m=+663.452026165" Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.290693 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx" podStartSLOduration=2.044457225 podStartE2EDuration="4.290674915s" podCreationTimestamp="2025-12-03 09:23:11 +0000 UTC" firstStartedPulling="2025-12-03 09:23:11.863492539 +0000 UTC m=+660.046384840" lastFinishedPulling="2025-12-03 09:23:14.109710229 +0000 UTC m=+662.292602530" observedRunningTime="2025-12-03 09:23:15.287560264 +0000 UTC m=+663.470452565" watchObservedRunningTime="2025-12-03 09:23:15.290674915 +0000 UTC m=+663.473567216" Dec 03 09:23:15 crc kubenswrapper[4856]: I1203 09:23:15.311988 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gwwkk" podStartSLOduration=1.8551195150000002 podStartE2EDuration="4.31196585s" podCreationTimestamp="2025-12-03 09:23:11 +0000 UTC" firstStartedPulling="2025-12-03 09:23:11.664113587 +0000 UTC m=+659.847005888" lastFinishedPulling="2025-12-03 09:23:14.120959922 +0000 UTC m=+662.303852223" observedRunningTime="2025-12-03 09:23:15.307606996 +0000 UTC m=+663.490499297" watchObservedRunningTime="2025-12-03 09:23:15.31196585 +0000 UTC m=+663.494858171" Dec 03 09:23:17 crc kubenswrapper[4856]: I1203 09:23:17.276728 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8" event={"ID":"a981397a-976a-4fc4-8cb9-af2d72410121","Type":"ContainerStarted","Data":"7fe4e771f9d8f4080266524a24bade1144a68fe7230141c70b0df72e653c432d"} Dec 03 09:23:17 crc kubenswrapper[4856]: I1203 09:23:17.301114 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sckh8" podStartSLOduration=1.721038403 podStartE2EDuration="6.301083744s" podCreationTimestamp="2025-12-03 09:23:11 +0000 UTC" firstStartedPulling="2025-12-03 09:23:11.816785013 +0000 UTC m=+659.999677304" lastFinishedPulling="2025-12-03 09:23:16.396830344 +0000 UTC m=+664.579722645" observedRunningTime="2025-12-03 09:23:17.299232296 +0000 UTC m=+665.482124637" watchObservedRunningTime="2025-12-03 09:23:17.301083744 +0000 UTC m=+665.483976085" Dec 03 09:23:21 crc kubenswrapper[4856]: I1203 09:23:21.602254 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gwwkk" Dec 03 09:23:21 crc kubenswrapper[4856]: I1203 09:23:21.927152 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84b86b69b6-ml69w" Dec 03 09:23:21 crc kubenswrapper[4856]: I1203 09:23:21.927274 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84b86b69b6-ml69w" Dec 03 09:23:21 crc kubenswrapper[4856]: I1203 09:23:21.933683 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84b86b69b6-ml69w" Dec 03 09:23:22 crc kubenswrapper[4856]: I1203 09:23:22.309729 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84b86b69b6-ml69w" Dec 03 09:23:22 crc kubenswrapper[4856]: I1203 09:23:22.354479 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-n2k6t"] Dec 03 09:23:31 crc kubenswrapper[4856]: I1203 09:23:31.582641 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vw9qx" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.644874 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss"] Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.646913 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.655301 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss"] Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.656244 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.733231 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psm2z\" (UniqueName: \"kubernetes.io/projected/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-kube-api-access-psm2z\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.733375 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.733435 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.835024 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psm2z\" (UniqueName: \"kubernetes.io/projected/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-kube-api-access-psm2z\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.835105 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.835373 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.835527 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.835678 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.853178 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psm2z\" (UniqueName: \"kubernetes.io/projected/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-kube-api-access-psm2z\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:46 crc kubenswrapper[4856]: I1203 09:23:46.965816 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.398892 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n2k6t" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" containerID="cri-o://48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30" gracePeriod=15 Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.401175 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss"] Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.503433 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" event={"ID":"3f5a808d-c561-4cf2-bf26-920fa9fe2d82","Type":"ContainerStarted","Data":"9e39f0b95c82a347fec52b09bf90d83c0b22c3043a57921cd89a106da1460065"} Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.798263 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n2k6t_5e734ba6-dfdf-406c-a466-579ad773f451/console/0.log" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.798550 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.949632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttg2\" (UniqueName: \"kubernetes.io/projected/5e734ba6-dfdf-406c-a466-579ad773f451-kube-api-access-9ttg2\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.949706 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-service-ca\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.949765 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-trusted-ca-bundle\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.949842 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-oauth-serving-cert\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.949907 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-serving-cert\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.949933 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-oauth-config\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.950029 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-console-config\") pod \"5e734ba6-dfdf-406c-a466-579ad773f451\" (UID: \"5e734ba6-dfdf-406c-a466-579ad773f451\") " Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.951177 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.952740 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.952767 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-service-ca" (OuterVolumeSpecName: "service-ca") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.952844 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-console-config" (OuterVolumeSpecName: "console-config") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.953136 4856 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.953183 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.953202 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.953221 4856 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e734ba6-dfdf-406c-a466-579ad773f451-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.956355 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.956432 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e734ba6-dfdf-406c-a466-579ad773f451-kube-api-access-9ttg2" (OuterVolumeSpecName: "kube-api-access-9ttg2") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "kube-api-access-9ttg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:23:47 crc kubenswrapper[4856]: I1203 09:23:47.957035 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5e734ba6-dfdf-406c-a466-579ad773f451" (UID: "5e734ba6-dfdf-406c-a466-579ad773f451"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.054330 4856 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.054374 4856 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e734ba6-dfdf-406c-a466-579ad773f451-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.054387 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttg2\" (UniqueName: \"kubernetes.io/projected/5e734ba6-dfdf-406c-a466-579ad773f451-kube-api-access-9ttg2\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.516897 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerID="617803b86d78c7a87f1a473545d8dd8bbcab8c220e036691ed19c45e88ce3f3c" exitCode=0 Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.516990 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" event={"ID":"3f5a808d-c561-4cf2-bf26-920fa9fe2d82","Type":"ContainerDied","Data":"617803b86d78c7a87f1a473545d8dd8bbcab8c220e036691ed19c45e88ce3f3c"} Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.521485 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n2k6t_5e734ba6-dfdf-406c-a466-579ad773f451/console/0.log" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.521530 4856 generic.go:334] "Generic (PLEG): container finished" podID="5e734ba6-dfdf-406c-a466-579ad773f451" containerID="48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30" exitCode=2 Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.521561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2k6t" event={"ID":"5e734ba6-dfdf-406c-a466-579ad773f451","Type":"ContainerDied","Data":"48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30"} Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.521587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2k6t" event={"ID":"5e734ba6-dfdf-406c-a466-579ad773f451","Type":"ContainerDied","Data":"0ae45e18f49e7cbeb05a4939a0d916a3c41b17cf5b2644f51187b262fbfeb3ac"} Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.521607 4856 scope.go:117] "RemoveContainer" containerID="48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.521717 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n2k6t" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.549972 4856 scope.go:117] "RemoveContainer" containerID="48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30" Dec 03 09:23:48 crc kubenswrapper[4856]: E1203 09:23:48.552084 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30\": container with ID starting with 48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30 not found: ID does not exist" containerID="48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.552159 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30"} err="failed to get container status \"48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30\": rpc error: code = NotFound desc = could not find container \"48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30\": container with ID starting with 48a5336f3a2e8710b8e95e5d8656452e1ed7129a58b19fbf94626f260c7c8c30 not found: ID does not exist" Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.555998 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n2k6t"] Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.566277 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n2k6t"] Dec 03 09:23:48 crc kubenswrapper[4856]: I1203 09:23:48.701174 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" path="/var/lib/kubelet/pods/5e734ba6-dfdf-406c-a466-579ad773f451/volumes" Dec 03 09:23:50 crc kubenswrapper[4856]: I1203 09:23:50.542135 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerID="0c29ddc96fc6537bd0aecdda11b0fdea4ec15831a3d586e9c1205d3d21b7399c" exitCode=0 Dec 03 09:23:50 crc kubenswrapper[4856]: I1203 09:23:50.542231 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" event={"ID":"3f5a808d-c561-4cf2-bf26-920fa9fe2d82","Type":"ContainerDied","Data":"0c29ddc96fc6537bd0aecdda11b0fdea4ec15831a3d586e9c1205d3d21b7399c"} Dec 03 09:23:51 crc kubenswrapper[4856]: I1203 09:23:51.553270 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerID="dee0255739c589b100b0ed3665021e01defd7552eee20adacf3f18e7e95d39b9" exitCode=0 Dec 03 09:23:51 crc kubenswrapper[4856]: I1203 09:23:51.553318 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" event={"ID":"3f5a808d-c561-4cf2-bf26-920fa9fe2d82","Type":"ContainerDied","Data":"dee0255739c589b100b0ed3665021e01defd7552eee20adacf3f18e7e95d39b9"} Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.761748 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.921362 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-bundle\") pod \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.921521 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psm2z\" (UniqueName: \"kubernetes.io/projected/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-kube-api-access-psm2z\") pod \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.921563 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-util\") pod \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\" (UID: \"3f5a808d-c561-4cf2-bf26-920fa9fe2d82\") " Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.922715 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-bundle" (OuterVolumeSpecName: "bundle") pod "3f5a808d-c561-4cf2-bf26-920fa9fe2d82" (UID: "3f5a808d-c561-4cf2-bf26-920fa9fe2d82"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.926561 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-kube-api-access-psm2z" (OuterVolumeSpecName: "kube-api-access-psm2z") pod "3f5a808d-c561-4cf2-bf26-920fa9fe2d82" (UID: "3f5a808d-c561-4cf2-bf26-920fa9fe2d82"). InnerVolumeSpecName "kube-api-access-psm2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:23:52 crc kubenswrapper[4856]: I1203 09:23:52.974993 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-util" (OuterVolumeSpecName: "util") pod "3f5a808d-c561-4cf2-bf26-920fa9fe2d82" (UID: "3f5a808d-c561-4cf2-bf26-920fa9fe2d82"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:23:53 crc kubenswrapper[4856]: I1203 09:23:53.023520 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:53 crc kubenswrapper[4856]: I1203 09:23:53.023570 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psm2z\" (UniqueName: \"kubernetes.io/projected/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-kube-api-access-psm2z\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:53 crc kubenswrapper[4856]: I1203 09:23:53.023585 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f5a808d-c561-4cf2-bf26-920fa9fe2d82-util\") on node \"crc\" DevicePath \"\"" Dec 03 09:23:53 crc kubenswrapper[4856]: I1203 09:23:53.569869 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" event={"ID":"3f5a808d-c561-4cf2-bf26-920fa9fe2d82","Type":"ContainerDied","Data":"9e39f0b95c82a347fec52b09bf90d83c0b22c3043a57921cd89a106da1460065"} Dec 03 09:23:53 crc kubenswrapper[4856]: I1203 09:23:53.569915 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e39f0b95c82a347fec52b09bf90d83c0b22c3043a57921cd89a106da1460065" Dec 03 09:23:53 crc kubenswrapper[4856]: I1203 09:23:53.570081 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.613467 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f"] Dec 03 09:24:01 crc kubenswrapper[4856]: E1203 09:24:01.614141 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="extract" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614154 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="extract" Dec 03 09:24:01 crc kubenswrapper[4856]: E1203 09:24:01.614165 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614171 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" Dec 03 09:24:01 crc kubenswrapper[4856]: E1203 09:24:01.614178 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="util" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614184 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="util" Dec 03 09:24:01 crc kubenswrapper[4856]: E1203 09:24:01.614197 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="pull" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614203 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="pull" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614291 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5a808d-c561-4cf2-bf26-920fa9fe2d82" containerName="extract" Dec 
03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614303 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e734ba6-dfdf-406c-a466-579ad773f451" containerName="console" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.614673 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.617084 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.617129 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.619590 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-n4jvb" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.620056 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.620103 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.643109 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f"] Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.741539 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3c18-dee0-4de8-8380-93060d971722-webhook-cert\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.741613 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr95\" (UniqueName: \"kubernetes.io/projected/b3ac3c18-dee0-4de8-8380-93060d971722-kube-api-access-7lr95\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.741780 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3c18-dee0-4de8-8380-93060d971722-apiservice-cert\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.843013 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3c18-dee0-4de8-8380-93060d971722-webhook-cert\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.843079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr95\" (UniqueName: 
\"kubernetes.io/projected/b3ac3c18-dee0-4de8-8380-93060d971722-kube-api-access-7lr95\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.844245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3c18-dee0-4de8-8380-93060d971722-apiservice-cert\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.852771 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3c18-dee0-4de8-8380-93060d971722-apiservice-cert\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.852872 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3c18-dee0-4de8-8380-93060d971722-webhook-cert\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.887381 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr95\" (UniqueName: \"kubernetes.io/projected/b3ac3c18-dee0-4de8-8380-93060d971722-kube-api-access-7lr95\") pod \"metallb-operator-controller-manager-67d6765696-kvt2f\" (UID: \"b3ac3c18-dee0-4de8-8380-93060d971722\") " pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.930740 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.964265 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx"] Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.965322 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.967655 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.971337 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-22kzr" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.971600 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 09:24:01 crc kubenswrapper[4856]: I1203 09:24:01.985386 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx"] Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.048168 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae371cb4-e8b4-40ea-a590-884cf5feae1f-webhook-cert\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.048257 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae371cb4-e8b4-40ea-a590-884cf5feae1f-apiservice-cert\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.048286 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6dv\" (UniqueName: \"kubernetes.io/projected/ae371cb4-e8b4-40ea-a590-884cf5feae1f-kube-api-access-pp6dv\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.150797 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae371cb4-e8b4-40ea-a590-884cf5feae1f-webhook-cert\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.151270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6dv\" (UniqueName: \"kubernetes.io/projected/ae371cb4-e8b4-40ea-a590-884cf5feae1f-kube-api-access-pp6dv\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.151300 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae371cb4-e8b4-40ea-a590-884cf5feae1f-apiservice-cert\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.156784 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae371cb4-e8b4-40ea-a590-884cf5feae1f-apiservice-cert\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.161524 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae371cb4-e8b4-40ea-a590-884cf5feae1f-webhook-cert\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.178377 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6dv\" (UniqueName: \"kubernetes.io/projected/ae371cb4-e8b4-40ea-a590-884cf5feae1f-kube-api-access-pp6dv\") pod \"metallb-operator-webhook-server-db87bc449-r6ztx\" (UID: \"ae371cb4-e8b4-40ea-a590-884cf5feae1f\") " pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.286946 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f"] Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.304731 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.623442 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" event={"ID":"b3ac3c18-dee0-4de8-8380-93060d971722","Type":"ContainerStarted","Data":"1838bc588f512ed61b359f65ff7389577404a21c1a427a12d1c4c29acd16b929"} Dec 03 09:24:02 crc kubenswrapper[4856]: I1203 09:24:02.623953 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx"] Dec 03 09:24:02 crc kubenswrapper[4856]: W1203 09:24:02.633650 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae371cb4_e8b4_40ea_a590_884cf5feae1f.slice/crio-2d626ecb171ffa0755b289aaae00423e22768d745d08fa0811f5959b96296c81 WatchSource:0}: Error finding container 2d626ecb171ffa0755b289aaae00423e22768d745d08fa0811f5959b96296c81: Status 404 returned error can't find the container with id 2d626ecb171ffa0755b289aaae00423e22768d745d08fa0811f5959b96296c81 Dec 03 09:24:03 crc kubenswrapper[4856]: I1203 09:24:03.632059 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" event={"ID":"ae371cb4-e8b4-40ea-a590-884cf5feae1f","Type":"ContainerStarted","Data":"2d626ecb171ffa0755b289aaae00423e22768d745d08fa0811f5959b96296c81"} Dec 03 09:24:07 crc kubenswrapper[4856]: I1203 09:24:07.658859 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" event={"ID":"b3ac3c18-dee0-4de8-8380-93060d971722","Type":"ContainerStarted","Data":"cc78fcfd0fadf7348093f3337a09bc5ec1e772bc73aec4514b41e6474298ba26"} Dec 03 09:24:07 crc kubenswrapper[4856]: I1203 09:24:07.658929 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:07 crc kubenswrapper[4856]: I1203 09:24:07.660403 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" event={"ID":"ae371cb4-e8b4-40ea-a590-884cf5feae1f","Type":"ContainerStarted","Data":"ba71be4c657eb460e8d6b320f6f81ae4ecd99e3cb8593cfbf1c29227415d12e4"} Dec 03 09:24:07 crc kubenswrapper[4856]: I1203 09:24:07.660579 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:07 crc kubenswrapper[4856]: I1203 09:24:07.684500 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" podStartSLOduration=3.552174888 podStartE2EDuration="6.684471584s" podCreationTimestamp="2025-12-03 09:24:01 +0000 UTC" firstStartedPulling="2025-12-03 09:24:02.304680494 +0000 UTC m=+710.487572795" lastFinishedPulling="2025-12-03 09:24:05.43697718 +0000 UTC m=+713.619869491" observedRunningTime="2025-12-03 09:24:07.679643328 +0000 UTC m=+715.862535629" watchObservedRunningTime="2025-12-03 09:24:07.684471584 +0000 UTC m=+715.867363885" Dec 03 09:24:07 crc kubenswrapper[4856]: I1203 09:24:07.708475 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" podStartSLOduration=2.297997904 podStartE2EDuration="6.708441538s" podCreationTimestamp="2025-12-03 09:24:01 +0000 UTC" firstStartedPulling="2025-12-03 09:24:02.650306356 +0000 UTC m=+710.833198647" lastFinishedPulling="2025-12-03 09:24:07.06074998 +0000 UTC m=+715.243642281" observedRunningTime="2025-12-03 09:24:07.705966184 +0000 UTC m=+715.888858485" watchObservedRunningTime="2025-12-03 09:24:07.708441538 +0000 UTC m=+715.891333839" Dec 03 09:24:22 crc kubenswrapper[4856]: I1203 09:24:22.310103 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-db87bc449-r6ztx" Dec 03 09:24:41 crc kubenswrapper[4856]: I1203 09:24:41.933200 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67d6765696-kvt2f" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.706193 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-shn9z"] Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.708895 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-shn9z" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.714229 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.714539 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-78nwg" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.714604 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.720842 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"] Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.721894 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.725166 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.747373 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"] Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.816352 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-conf\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.816611 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-reloader\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.816706 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7p2p\" (UniqueName: \"kubernetes.io/projected/65ad74a6-7267-45f3-b6c2-898a3906758b-kube-api-access-d7p2p\") pod \"frr-k8s-webhook-server-7fcb986d4-bcpx9\" (UID: \"65ad74a6-7267-45f3-b6c2-898a3906758b\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.816784 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65ad74a6-7267-45f3-b6c2-898a3906758b-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bcpx9\" (UID: \"65ad74a6-7267-45f3-b6c2-898a3906758b\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.816901 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d97a180e-36f0-45d7-b2de-7b92d84f26d8-metrics-certs\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.816987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8ss\" (UniqueName: \"kubernetes.io/projected/d97a180e-36f0-45d7-b2de-7b92d84f26d8-kube-api-access-jm8ss\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.817086 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-metrics\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z" Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.817196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-startup\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " 
pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.817273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-sockets\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.838048 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-q26bm"]
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.839568 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q26bm"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.843242 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.843280 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.845849 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.847499 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zmpnr"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.892007 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-hpb7s"]
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.892897 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.902349 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.917723 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-hpb7s"]
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.917933 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-startup\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.920576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-sockets\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.920708 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-conf\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.920835 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-reloader\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.920963 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7p2p\" (UniqueName: \"kubernetes.io/projected/65ad74a6-7267-45f3-b6c2-898a3906758b-kube-api-access-d7p2p\") pod \"frr-k8s-webhook-server-7fcb986d4-bcpx9\" (UID: \"65ad74a6-7267-45f3-b6c2-898a3906758b\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.921205 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65ad74a6-7267-45f3-b6c2-898a3906758b-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bcpx9\" (UID: \"65ad74a6-7267-45f3-b6c2-898a3906758b\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.921460 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d97a180e-36f0-45d7-b2de-7b92d84f26d8-metrics-certs\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.921568 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8ss\" (UniqueName: \"kubernetes.io/projected/d97a180e-36f0-45d7-b2de-7b92d84f26d8-kube-api-access-jm8ss\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.921670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-metrics\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.922137 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-metrics\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.918789 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-startup\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.922494 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-sockets\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.922758 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-frr-conf\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.923061 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d97a180e-36f0-45d7-b2de-7b92d84f26d8-reloader\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.940938 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d97a180e-36f0-45d7-b2de-7b92d84f26d8-metrics-certs\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.943602 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65ad74a6-7267-45f3-b6c2-898a3906758b-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bcpx9\" (UID: \"65ad74a6-7267-45f3-b6c2-898a3906758b\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.962518 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8ss\" (UniqueName: \"kubernetes.io/projected/d97a180e-36f0-45d7-b2de-7b92d84f26d8-kube-api-access-jm8ss\") pod \"frr-k8s-shn9z\" (UID: \"d97a180e-36f0-45d7-b2de-7b92d84f26d8\") " pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:42 crc kubenswrapper[4856]: I1203 09:24:42.968128 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7p2p\" (UniqueName: \"kubernetes.io/projected/65ad74a6-7267-45f3-b6c2-898a3906758b-kube-api-access-d7p2p\") pod \"frr-k8s-webhook-server-7fcb986d4-bcpx9\" (UID: \"65ad74a6-7267-45f3-b6c2-898a3906758b\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.022696 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.022750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-metrics-certs\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.022815 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-cert\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.022929 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-metrics-certs\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.022994 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmmf\" (UniqueName: \"kubernetes.io/projected/0142725c-0a39-4e1c-bef6-a3027f105162-kube-api-access-llmmf\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.023050 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5ngl\" (UniqueName: \"kubernetes.io/projected/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-kube-api-access-q5ngl\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.023095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0142725c-0a39-4e1c-bef6-a3027f105162-metallb-excludel2\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.029526 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.043030 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.124771 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0142725c-0a39-4e1c-bef6-a3027f105162-metallb-excludel2\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.124846 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.124876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-metrics-certs\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.124935 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-cert\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.124983 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-metrics-certs\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.125015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmmf\" (UniqueName: \"kubernetes.io/projected/0142725c-0a39-4e1c-bef6-a3027f105162-kube-api-access-llmmf\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: E1203 09:24:43.125033 4856 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.125052 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5ngl\" (UniqueName: \"kubernetes.io/projected/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-kube-api-access-q5ngl\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: E1203 09:24:43.125118 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist podName:0142725c-0a39-4e1c-bef6-a3027f105162 nodeName:}" failed. No retries permitted until 2025-12-03 09:24:43.625100432 +0000 UTC m=+751.807992733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist") pod "speaker-q26bm" (UID: "0142725c-0a39-4e1c-bef6-a3027f105162") : secret "metallb-memberlist" not found
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.125912 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0142725c-0a39-4e1c-bef6-a3027f105162-metallb-excludel2\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.127621 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.130583 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-metrics-certs\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.132446 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-metrics-certs\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.138904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-cert\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.152278 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5ngl\" (UniqueName: \"kubernetes.io/projected/c049cfb9-a9ea-4348-88d4-40aacaf0c01a-kube-api-access-q5ngl\") pod \"controller-f8648f98b-hpb7s\" (UID: \"c049cfb9-a9ea-4348-88d4-40aacaf0c01a\") " pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.156059 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmmf\" (UniqueName: \"kubernetes.io/projected/0142725c-0a39-4e1c-bef6-a3027f105162-kube-api-access-llmmf\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.208969 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.279934 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"]
Dec 03 09:24:43 crc kubenswrapper[4856]: W1203 09:24:43.292019 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65ad74a6_7267_45f3_b6c2_898a3906758b.slice/crio-7c0dc06b030f53e532f305c0252c3d217b61641b045a77c91e9cd3fc6170dfa3 WatchSource:0}: Error finding container 7c0dc06b030f53e532f305c0252c3d217b61641b045a77c91e9cd3fc6170dfa3: Status 404 returned error can't find the container with id 7c0dc06b030f53e532f305c0252c3d217b61641b045a77c91e9cd3fc6170dfa3
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.629391 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-hpb7s"]
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.633481 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:43 crc kubenswrapper[4856]: E1203 09:24:43.634038 4856 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 03 09:24:43 crc kubenswrapper[4856]: E1203 09:24:43.634109 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist podName:0142725c-0a39-4e1c-bef6-a3027f105162 nodeName:}" failed. No retries permitted until 2025-12-03 09:24:44.634090168 +0000 UTC m=+752.816982469 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist") pod "speaker-q26bm" (UID: "0142725c-0a39-4e1c-bef6-a3027f105162") : secret "metallb-memberlist" not found
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.865973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-hpb7s" event={"ID":"c049cfb9-a9ea-4348-88d4-40aacaf0c01a","Type":"ContainerStarted","Data":"299cddab4d08e3c5153788a5223a481891c0e1dea279b8c1214f72abbadeb02d"}
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.866027 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-hpb7s" event={"ID":"c049cfb9-a9ea-4348-88d4-40aacaf0c01a","Type":"ContainerStarted","Data":"3aa1a3b169ba13284156ca1048c6fb5091912a4ad6a1e28482db1d6d552ab820"}
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.867528 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9" event={"ID":"65ad74a6-7267-45f3-b6c2-898a3906758b","Type":"ContainerStarted","Data":"7c0dc06b030f53e532f305c0252c3d217b61641b045a77c91e9cd3fc6170dfa3"}
Dec 03 09:24:43 crc kubenswrapper[4856]: I1203 09:24:43.868739 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"dbb3b430be7250021480e1a938b9c699bce5c1574cec59a7a21fc50751b9609d"}
Dec 03 09:24:44 crc kubenswrapper[4856]: I1203 09:24:44.647523 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:44 crc kubenswrapper[4856]: I1203 09:24:44.666104 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0142725c-0a39-4e1c-bef6-a3027f105162-memberlist\") pod \"speaker-q26bm\" (UID: \"0142725c-0a39-4e1c-bef6-a3027f105162\") " pod="metallb-system/speaker-q26bm"
Dec 03 09:24:44 crc kubenswrapper[4856]: I1203 09:24:44.889071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-hpb7s" event={"ID":"c049cfb9-a9ea-4348-88d4-40aacaf0c01a","Type":"ContainerStarted","Data":"75c7cf16dbf4b362c80facbe0051957ee54b00e4be75d6ddd0f4281ce8444069"}
Dec 03 09:24:44 crc kubenswrapper[4856]: I1203 09:24:44.890576 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:44 crc kubenswrapper[4856]: I1203 09:24:44.924639 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-hpb7s" podStartSLOduration=2.924614527 podStartE2EDuration="2.924614527s" podCreationTimestamp="2025-12-03 09:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:24:44.921368322 +0000 UTC m=+753.104260623" watchObservedRunningTime="2025-12-03 09:24:44.924614527 +0000 UTC m=+753.107506818"
Dec 03 09:24:44 crc kubenswrapper[4856]: I1203 09:24:44.957723 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q26bm"
Dec 03 09:24:45 crc kubenswrapper[4856]: I1203 09:24:45.899299 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q26bm" event={"ID":"0142725c-0a39-4e1c-bef6-a3027f105162","Type":"ContainerStarted","Data":"581bd87ac80dc5ebd427ac5627c7227b7e64ff14bf11d7112b5195351703a050"}
Dec 03 09:24:45 crc kubenswrapper[4856]: I1203 09:24:45.899648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q26bm" event={"ID":"0142725c-0a39-4e1c-bef6-a3027f105162","Type":"ContainerStarted","Data":"360a3578053b4df3a732488f8d400fcaf9ec756bd591adec5f48d66d0279858d"}
Dec 03 09:24:45 crc kubenswrapper[4856]: I1203 09:24:45.899660 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q26bm" event={"ID":"0142725c-0a39-4e1c-bef6-a3027f105162","Type":"ContainerStarted","Data":"7f5407fe1ed172fe62ef135e134c0544885c0cd017e3946bc6ccf263b54603cb"}
Dec 03 09:24:45 crc kubenswrapper[4856]: I1203 09:24:45.900087 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-q26bm"
Dec 03 09:24:50 crc kubenswrapper[4856]: I1203 09:24:50.935457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9" event={"ID":"65ad74a6-7267-45f3-b6c2-898a3906758b","Type":"ContainerStarted","Data":"e4764882f6cb5ddea598710feebdbf8dd225c5b7ded8bb5bb58f8d9d024e667a"}
Dec 03 09:24:50 crc kubenswrapper[4856]: I1203 09:24:50.937268 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:24:50 crc kubenswrapper[4856]: I1203 09:24:50.938721 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"5f9dce0beb9b567bab1deb4a63d7c426f82e8c876945dc91911e9f8821550fbc"}
Dec 03 09:24:50 crc kubenswrapper[4856]: I1203 09:24:50.962117 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9" podStartSLOduration=1.521957668 podStartE2EDuration="8.962101505s" podCreationTimestamp="2025-12-03 09:24:42 +0000 UTC" firstStartedPulling="2025-12-03 09:24:43.299774161 +0000 UTC m=+751.482666462" lastFinishedPulling="2025-12-03 09:24:50.739917998 +0000 UTC m=+758.922810299" observedRunningTime="2025-12-03 09:24:50.958435679 +0000 UTC m=+759.141327990" watchObservedRunningTime="2025-12-03 09:24:50.962101505 +0000 UTC m=+759.144993806"
Dec 03 09:24:50 crc kubenswrapper[4856]: I1203 09:24:50.964043 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-q26bm" podStartSLOduration=8.964032315 podStartE2EDuration="8.964032315s" podCreationTimestamp="2025-12-03 09:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:24:45.926240203 +0000 UTC m=+754.109132514" watchObservedRunningTime="2025-12-03 09:24:50.964032315 +0000 UTC m=+759.146924616"
Dec 03 09:24:51 crc kubenswrapper[4856]: I1203 09:24:51.947016 4856 generic.go:334] "Generic (PLEG): container finished" podID="d97a180e-36f0-45d7-b2de-7b92d84f26d8" containerID="5f9dce0beb9b567bab1deb4a63d7c426f82e8c876945dc91911e9f8821550fbc" exitCode=0
Dec 03 09:24:51 crc kubenswrapper[4856]: I1203 09:24:51.947358 4856 generic.go:334] "Generic (PLEG): container finished" podID="d97a180e-36f0-45d7-b2de-7b92d84f26d8" containerID="56b2384b6bc05dc10437a11516cf1ef76951f4981bd65cc581fb2dba736f095b" exitCode=0
Dec 03 09:24:51 crc kubenswrapper[4856]: I1203 09:24:51.947085 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerDied","Data":"5f9dce0beb9b567bab1deb4a63d7c426f82e8c876945dc91911e9f8821550fbc"}
Dec 03 09:24:51 crc kubenswrapper[4856]: I1203 09:24:51.947462 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerDied","Data":"56b2384b6bc05dc10437a11516cf1ef76951f4981bd65cc581fb2dba736f095b"}
Dec 03 09:24:52 crc kubenswrapper[4856]: I1203 09:24:52.958910 4856 generic.go:334] "Generic (PLEG): container finished" podID="d97a180e-36f0-45d7-b2de-7b92d84f26d8" containerID="dc39ac24236d6dd42166fced6ebd476c3030568ff13162d5806422b617bbcd61" exitCode=0
Dec 03 09:24:52 crc kubenswrapper[4856]: I1203 09:24:52.958960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerDied","Data":"dc39ac24236d6dd42166fced6ebd476c3030568ff13162d5806422b617bbcd61"}
Dec 03 09:24:53 crc kubenswrapper[4856]: I1203 09:24:53.213123 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-hpb7s"
Dec 03 09:24:53 crc kubenswrapper[4856]: I1203 09:24:53.970616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"ac3f9d44ee1933f4016121bffe30055af6003925afc692cbb0b9626b98e6e36c"}
Dec 03 09:24:53 crc kubenswrapper[4856]: I1203 09:24:53.970899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"aa81d0cb2544ddcffd613d2627acd6ed69fddecd1b96659d2238b524727a453a"}
Dec 03 09:24:53 crc kubenswrapper[4856]: I1203 09:24:53.970911 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"7bac90e324857fb0ab613ea674d1688f0fb703c37bc93de861b8d4b71632eee6"}
Dec 03 09:24:53 crc kubenswrapper[4856]: I1203 09:24:53.970919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"e0d60f729f56fb917aa9a95394e3a7bac03ecdbde389a092bfb98faf41adab37"}
Dec 03 09:24:54 crc kubenswrapper[4856]: I1203 09:24:54.983665 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"f3a4c14def1bc73eb07229f063917577730fa224e92832fb8ace52787d38438f"}
Dec 03 09:24:54 crc kubenswrapper[4856]: I1203 09:24:54.983723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-shn9z" event={"ID":"d97a180e-36f0-45d7-b2de-7b92d84f26d8","Type":"ContainerStarted","Data":"43a174ce9e65b1061222e3185efea250f07a5a847392517d543e6d816deeb2b1"}
Dec 03 09:24:54 crc kubenswrapper[4856]: I1203 09:24:54.984136 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:55 crc kubenswrapper[4856]: I1203 09:24:55.012974 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-shn9z" podStartSLOduration=5.488347335 podStartE2EDuration="13.012957212s" podCreationTimestamp="2025-12-03 09:24:42 +0000 UTC" firstStartedPulling="2025-12-03 09:24:43.236958965 +0000 UTC m=+751.419851266" lastFinishedPulling="2025-12-03 09:24:50.761568842 +0000 UTC m=+758.944461143" observedRunningTime="2025-12-03 09:24:55.010358124 +0000 UTC m=+763.193250445" watchObservedRunningTime="2025-12-03 09:24:55.012957212 +0000 UTC m=+763.195849513"
Dec 03 09:24:58 crc kubenswrapper[4856]: I1203 09:24:58.029919 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:24:58 crc kubenswrapper[4856]: I1203 09:24:58.068379 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:25:03 crc kubenswrapper[4856]: I1203 09:25:03.033423 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-shn9z"
Dec 03 09:25:03 crc kubenswrapper[4856]: I1203 09:25:03.048447 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bcpx9"
Dec 03 09:25:04 crc kubenswrapper[4856]: I1203 09:25:04.962404 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-q26bm"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.773353 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qvw55"]
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.774657 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.779904 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9vmt4"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.779903 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.780263 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.819627 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pl8j\" (UniqueName: \"kubernetes.io/projected/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c-kube-api-access-8pl8j\") pod \"openstack-operator-index-qvw55\" (UID: \"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c\") " pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.835157 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qvw55"]
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.920821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pl8j\" (UniqueName: \"kubernetes.io/projected/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c-kube-api-access-8pl8j\") pod \"openstack-operator-index-qvw55\" (UID: \"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c\") " pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:07 crc kubenswrapper[4856]: I1203 09:25:07.941869 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pl8j\" (UniqueName: \"kubernetes.io/projected/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c-kube-api-access-8pl8j\") pod \"openstack-operator-index-qvw55\" (UID: \"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c\") " pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:08 crc kubenswrapper[4856]: I1203 09:25:08.093711 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:08 crc kubenswrapper[4856]: I1203 09:25:08.336984 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qvw55"]
Dec 03 09:25:08 crc kubenswrapper[4856]: W1203 09:25:08.348090 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5ce16cd_1c0b_4547_ae3f_a47dc5ce0f8c.slice/crio-2f416fb71e98ed615d44f4ca1fcfbe3f3f68e950129a54cfba6ba0bdd546e52c WatchSource:0}: Error finding container 2f416fb71e98ed615d44f4ca1fcfbe3f3f68e950129a54cfba6ba0bdd546e52c: Status 404 returned error can't find the container with id 2f416fb71e98ed615d44f4ca1fcfbe3f3f68e950129a54cfba6ba0bdd546e52c
Dec 03 09:25:09 crc kubenswrapper[4856]: I1203 09:25:09.067843 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvw55" event={"ID":"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c","Type":"ContainerStarted","Data":"2f416fb71e98ed615d44f4ca1fcfbe3f3f68e950129a54cfba6ba0bdd546e52c"}
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.151511 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qvw55"]
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.558219 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nj5zz"]
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.559168 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.571528 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nj5zz"]
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.586273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbxw\" (UniqueName: \"kubernetes.io/projected/43d6261a-49c7-40ca-8403-1fa273ef863c-kube-api-access-5nbxw\") pod \"openstack-operator-index-nj5zz\" (UID: \"43d6261a-49c7-40ca-8403-1fa273ef863c\") " pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.687620 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nbxw\" (UniqueName: \"kubernetes.io/projected/43d6261a-49c7-40ca-8403-1fa273ef863c-kube-api-access-5nbxw\") pod \"openstack-operator-index-nj5zz\" (UID: \"43d6261a-49c7-40ca-8403-1fa273ef863c\") " pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.713710 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nbxw\" (UniqueName: \"kubernetes.io/projected/43d6261a-49c7-40ca-8403-1fa273ef863c-kube-api-access-5nbxw\") pod \"openstack-operator-index-nj5zz\" (UID: \"43d6261a-49c7-40ca-8403-1fa273ef863c\") " pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:12 crc kubenswrapper[4856]: I1203 09:25:12.885177 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:13 crc kubenswrapper[4856]: I1203 09:25:13.413696 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nj5zz"]
Dec 03 09:25:13 crc kubenswrapper[4856]: W1203 09:25:13.418050 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43d6261a_49c7_40ca_8403_1fa273ef863c.slice/crio-0cd844f088691bfe153b45d775fd651c0e3c7dd10855fb5b8e99317ed863ad22 WatchSource:0}: Error finding container 0cd844f088691bfe153b45d775fd651c0e3c7dd10855fb5b8e99317ed863ad22: Status 404 returned error can't find the container with id 0cd844f088691bfe153b45d775fd651c0e3c7dd10855fb5b8e99317ed863ad22
Dec 03 09:25:14 crc kubenswrapper[4856]: I1203 09:25:14.104019 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvw55" event={"ID":"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c","Type":"ContainerStarted","Data":"df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb"}
Dec 03 09:25:14 crc kubenswrapper[4856]: I1203 09:25:14.104130 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qvw55" podUID="a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" containerName="registry-server" containerID="cri-o://df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb" gracePeriod=2
Dec 03 09:25:14 crc kubenswrapper[4856]: I1203 09:25:14.112187 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nj5zz" event={"ID":"43d6261a-49c7-40ca-8403-1fa273ef863c","Type":"ContainerStarted","Data":"f9453d8ed9ab6003ec1c53a983aa3b21982b70546a0c39f0ef0473ecef913890"}
Dec 03 09:25:14 crc kubenswrapper[4856]: I1203 09:25:14.112312 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nj5zz" event={"ID":"43d6261a-49c7-40ca-8403-1fa273ef863c","Type":"ContainerStarted","Data":"0cd844f088691bfe153b45d775fd651c0e3c7dd10855fb5b8e99317ed863ad22"}
Dec 03 09:25:14 crc kubenswrapper[4856]: I1203 09:25:14.130483 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qvw55" podStartSLOduration=2.425899371 podStartE2EDuration="7.130462336s" podCreationTimestamp="2025-12-03 09:25:07 +0000 UTC" firstStartedPulling="2025-12-03 09:25:08.353798024 +0000 UTC m=+776.536690325" lastFinishedPulling="2025-12-03 09:25:13.058360989 +0000 UTC m=+781.241253290" observedRunningTime="2025-12-03 09:25:14.12985572 +0000 UTC m=+782.312748031" watchObservedRunningTime="2025-12-03 09:25:14.130462336 +0000 UTC m=+782.313354637"
Dec 03 09:25:14 crc kubenswrapper[4856]: I1203 09:25:14.161484 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nj5zz" podStartSLOduration=2.088916878 podStartE2EDuration="2.16146576s" podCreationTimestamp="2025-12-03 09:25:12 +0000 UTC" firstStartedPulling="2025-12-03 09:25:13.421864027 +0000 UTC m=+781.604756328" lastFinishedPulling="2025-12-03 09:25:13.494412909 +0000 UTC m=+781.677305210" observedRunningTime="2025-12-03 09:25:14.157192319 +0000 UTC m=+782.340084620" watchObservedRunningTime="2025-12-03 09:25:14.16146576 +0000 UTC m=+782.344358061"
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.001146 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.127168 4856 generic.go:334] "Generic (PLEG): container finished" podID="a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" containerID="df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb" exitCode=0
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.127261 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qvw55"
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.127296 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvw55" event={"ID":"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c","Type":"ContainerDied","Data":"df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb"}
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.127328 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvw55" event={"ID":"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c","Type":"ContainerDied","Data":"2f416fb71e98ed615d44f4ca1fcfbe3f3f68e950129a54cfba6ba0bdd546e52c"}
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.127348 4856 scope.go:117] "RemoveContainer" containerID="df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb"
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.131968 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pl8j\" (UniqueName: \"kubernetes.io/projected/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c-kube-api-access-8pl8j\") pod \"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c\" (UID: \"a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c\") "
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.141392 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c-kube-api-access-8pl8j" (OuterVolumeSpecName: "kube-api-access-8pl8j") pod "a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" (UID: "a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c"). InnerVolumeSpecName "kube-api-access-8pl8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.158857 4856 scope.go:117] "RemoveContainer" containerID="df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb"
Dec 03 09:25:15 crc kubenswrapper[4856]: E1203 09:25:15.159375 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb\": container with ID starting with df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb not found: ID does not exist" containerID="df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb"
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.159412 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb"} err="failed to get container status \"df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb\": rpc error: code = NotFound desc = could not find container \"df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb\": container with ID starting with df3dcda49eb6785c125ecde153ddf768184a630530455f5bfad8c9b7cb255dfb not found: ID does not exist"
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.233288 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pl8j\" (UniqueName: \"kubernetes.io/projected/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c-kube-api-access-8pl8j\") on node \"crc\" DevicePath \"\""
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.460276 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qvw55"]
Dec 03 09:25:15 crc kubenswrapper[4856]: I1203 09:25:15.464680 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qvw55"]
Dec 03 09:25:16 crc kubenswrapper[4856]: I1203 09:25:16.700602 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" path="/var/lib/kubelet/pods/a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c/volumes"
Dec 03 09:25:18 crc kubenswrapper[4856]: I1203 09:25:18.750120 4856 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 09:25:22 crc kubenswrapper[4856]: I1203 09:25:22.759408 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 09:25:22 crc kubenswrapper[4856]: I1203 09:25:22.759717 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 09:25:22 crc kubenswrapper[4856]: I1203 09:25:22.885334 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:22 crc kubenswrapper[4856]: I1203 09:25:22.885389 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:22 crc kubenswrapper[4856]: I1203 09:25:22.909230 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:23 crc kubenswrapper[4856]: I1203 09:25:23.220041 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nj5zz"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.193562 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"]
Dec 03 09:25:30 crc kubenswrapper[4856]: E1203 09:25:30.194479 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" containerName="registry-server"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.194494 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" containerName="registry-server"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.194647 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ce16cd-1c0b-4547-ae3f-a47dc5ce0f8c" containerName="registry-server"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.195685 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.197988 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fwj9c"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.213920 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"]
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.369660 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-util\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.369745 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-bundle\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.369798 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6x4\" (UniqueName: \"kubernetes.io/projected/01d8caec-af0a-4a43-97d4-b790eb73850c-kube-api-access-wm6x4\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.470625 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-util\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.470714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-bundle\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.470777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6x4\" (UniqueName: \"kubernetes.io/projected/01d8caec-af0a-4a43-97d4-b790eb73850c-kube-api-access-wm6x4\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.471311 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-util\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.471343 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-bundle\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.496029 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6x4\" (UniqueName: \"kubernetes.io/projected/01d8caec-af0a-4a43-97d4-b790eb73850c-kube-api-access-wm6x4\") pod \"464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") " pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.514934 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:30 crc kubenswrapper[4856]: I1203 09:25:30.920188 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"]
Dec 03 09:25:30 crc kubenswrapper[4856]: W1203 09:25:30.927925 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d8caec_af0a_4a43_97d4_b790eb73850c.slice/crio-fde8de61f2a1c01b04fb9fa98d71714c33c2f9f86396cb0a4411cd4ade5928bd WatchSource:0}: Error finding container fde8de61f2a1c01b04fb9fa98d71714c33c2f9f86396cb0a4411cd4ade5928bd: Status 404 returned error can't find the container with id fde8de61f2a1c01b04fb9fa98d71714c33c2f9f86396cb0a4411cd4ade5928bd
Dec 03 09:25:31 crc kubenswrapper[4856]: I1203 09:25:31.283388 4856 generic.go:334] "Generic (PLEG): container finished" podID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerID="150d016149d9b83ea73c98e60dd7ab80bd3903d5d24ffb42c8f0196afeeb95a0" exitCode=0
Dec 03 09:25:31 crc kubenswrapper[4856]: I1203 09:25:31.283440 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d" event={"ID":"01d8caec-af0a-4a43-97d4-b790eb73850c","Type":"ContainerDied","Data":"150d016149d9b83ea73c98e60dd7ab80bd3903d5d24ffb42c8f0196afeeb95a0"}
Dec 03 09:25:31 crc kubenswrapper[4856]: I1203 09:25:31.283738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d" event={"ID":"01d8caec-af0a-4a43-97d4-b790eb73850c","Type":"ContainerStarted","Data":"fde8de61f2a1c01b04fb9fa98d71714c33c2f9f86396cb0a4411cd4ade5928bd"}
Dec 03 09:25:32 crc kubenswrapper[4856]: I1203 09:25:32.297549 4856 generic.go:334] "Generic (PLEG): container finished" podID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerID="100606e67571a9560d85b4a13281a812dfcd1d8c94ba1555dfe1b6378355efd0" exitCode=0
Dec 03 09:25:32 crc kubenswrapper[4856]: I1203 09:25:32.297670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d" event={"ID":"01d8caec-af0a-4a43-97d4-b790eb73850c","Type":"ContainerDied","Data":"100606e67571a9560d85b4a13281a812dfcd1d8c94ba1555dfe1b6378355efd0"}
Dec 03 09:25:33 crc kubenswrapper[4856]: I1203 09:25:33.305598 4856 generic.go:334] "Generic (PLEG): container finished" podID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerID="bb452b6784f46f4ebaaf8c7784e4a1d4b28592e376a7ca28afc8e66e474a2d76" exitCode=0
Dec 03 09:25:33 crc kubenswrapper[4856]: I1203 09:25:33.305648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d" event={"ID":"01d8caec-af0a-4a43-97d4-b790eb73850c","Type":"ContainerDied","Data":"bb452b6784f46f4ebaaf8c7784e4a1d4b28592e376a7ca28afc8e66e474a2d76"}
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.619887 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.726507 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6x4\" (UniqueName: \"kubernetes.io/projected/01d8caec-af0a-4a43-97d4-b790eb73850c-kube-api-access-wm6x4\") pod \"01d8caec-af0a-4a43-97d4-b790eb73850c\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") "
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.726589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-util\") pod \"01d8caec-af0a-4a43-97d4-b790eb73850c\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") "
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.726632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-bundle\") pod \"01d8caec-af0a-4a43-97d4-b790eb73850c\" (UID: \"01d8caec-af0a-4a43-97d4-b790eb73850c\") "
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.727595 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-bundle" (OuterVolumeSpecName: "bundle") pod "01d8caec-af0a-4a43-97d4-b790eb73850c" (UID: "01d8caec-af0a-4a43-97d4-b790eb73850c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.735186 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d8caec-af0a-4a43-97d4-b790eb73850c-kube-api-access-wm6x4" (OuterVolumeSpecName: "kube-api-access-wm6x4") pod "01d8caec-af0a-4a43-97d4-b790eb73850c" (UID: "01d8caec-af0a-4a43-97d4-b790eb73850c"). InnerVolumeSpecName "kube-api-access-wm6x4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.741664 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-util" (OuterVolumeSpecName: "util") pod "01d8caec-af0a-4a43-97d4-b790eb73850c" (UID: "01d8caec-af0a-4a43-97d4-b790eb73850c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.828118 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6x4\" (UniqueName: \"kubernetes.io/projected/01d8caec-af0a-4a43-97d4-b790eb73850c-kube-api-access-wm6x4\") on node \"crc\" DevicePath \"\""
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.828489 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-util\") on node \"crc\" DevicePath \"\""
Dec 03 09:25:34 crc kubenswrapper[4856]: I1203 09:25:34.828553 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01d8caec-af0a-4a43-97d4-b790eb73850c-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 09:25:35 crc kubenswrapper[4856]: I1203 09:25:35.322935 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d" event={"ID":"01d8caec-af0a-4a43-97d4-b790eb73850c","Type":"ContainerDied","Data":"fde8de61f2a1c01b04fb9fa98d71714c33c2f9f86396cb0a4411cd4ade5928bd"}
Dec 03 09:25:35 crc kubenswrapper[4856]: I1203 09:25:35.323414 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde8de61f2a1c01b04fb9fa98d71714c33c2f9f86396cb0a4411cd4ade5928bd"
Dec 03 09:25:35 crc kubenswrapper[4856]: I1203 09:25:35.323025 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.093431 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"]
Dec 03 09:25:37 crc kubenswrapper[4856]: E1203 09:25:37.093857 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="pull"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.093877 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="pull"
Dec 03 09:25:37 crc kubenswrapper[4856]: E1203 09:25:37.093887 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="util"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.093894 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="util"
Dec 03 09:25:37 crc kubenswrapper[4856]: E1203 09:25:37.093908 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="extract"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.093914 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="extract"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.094034 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d8caec-af0a-4a43-97d4-b790eb73850c" containerName="extract"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.094669 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.097449 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-bh5ws"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.114191 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"]
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.158353 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvw9\" (UniqueName: \"kubernetes.io/projected/d0f9ba14-7b89-4373-a0f6-67ceb97ffb71-kube-api-access-nsvw9\") pod \"openstack-operator-controller-operator-6bf68648df-qkkzg\" (UID: \"d0f9ba14-7b89-4373-a0f6-67ceb97ffb71\") " pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.259519 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsvw9\" (UniqueName: \"kubernetes.io/projected/d0f9ba14-7b89-4373-a0f6-67ceb97ffb71-kube-api-access-nsvw9\") pod \"openstack-operator-controller-operator-6bf68648df-qkkzg\" (UID: \"d0f9ba14-7b89-4373-a0f6-67ceb97ffb71\") " pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.288925 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsvw9\" (UniqueName: \"kubernetes.io/projected/d0f9ba14-7b89-4373-a0f6-67ceb97ffb71-kube-api-access-nsvw9\") pod \"openstack-operator-controller-operator-6bf68648df-qkkzg\" (UID: \"d0f9ba14-7b89-4373-a0f6-67ceb97ffb71\") " pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.412585 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:37 crc kubenswrapper[4856]: I1203 09:25:37.762416 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"]
Dec 03 09:25:38 crc kubenswrapper[4856]: I1203 09:25:38.344689 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg" event={"ID":"d0f9ba14-7b89-4373-a0f6-67ceb97ffb71","Type":"ContainerStarted","Data":"5669c283cf2df51c2dc25b9f5f0b34988f18ae6fe45fb2c47d9b59473e46c4d2"}
Dec 03 09:25:42 crc kubenswrapper[4856]: I1203 09:25:42.377384 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg" event={"ID":"d0f9ba14-7b89-4373-a0f6-67ceb97ffb71","Type":"ContainerStarted","Data":"17a90c5a64ad262bc943e997463a2abcb3ef3f8a349cfa0a70df7670d92c523a"}
Dec 03 09:25:42 crc kubenswrapper[4856]: I1203 09:25:42.378920 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:42 crc kubenswrapper[4856]: I1203 09:25:42.412603 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg" podStartSLOduration=1.035157437 podStartE2EDuration="5.412583276s" podCreationTimestamp="2025-12-03 09:25:37 +0000 UTC" firstStartedPulling="2025-12-03 09:25:37.760940774 +0000 UTC m=+805.943833075" lastFinishedPulling="2025-12-03 09:25:42.138366603 +0000 UTC m=+810.321258914" observedRunningTime="2025-12-03 09:25:42.407648328 +0000 UTC m=+810.590540639" watchObservedRunningTime="2025-12-03 09:25:42.412583276 +0000 UTC m=+810.595475577"
Dec 03 09:25:47 crc kubenswrapper[4856]: I1203 09:25:47.415841 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6bf68648df-qkkzg"
Dec 03 09:25:52 crc kubenswrapper[4856]: I1203 09:25:52.758861 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 09:25:52 crc kubenswrapper[4856]: I1203 09:25:52.759490 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.808962 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.811862 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.812974 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.815776 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sj6ws"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.819307 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.822992 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bhz2q"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.844730 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.853038 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.854416 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.858695 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.860100 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.863475 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wgqbx"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.863764 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kx9jf"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.884327 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.892295 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.905900 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.907309 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.915289 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zwww8"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.916036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgh7c\" (UniqueName: \"kubernetes.io/projected/ab444944-5290-47a9-a2ca-8c544c5350b6-kube-api-access-qgh7c\") pod \"barbican-operator-controller-manager-7d9dfd778-4m4j5\" (UID: \"ab444944-5290-47a9-a2ca-8c544c5350b6\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.916098 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zldq\" (UniqueName: \"kubernetes.io/projected/445299d7-37a7-4fa0-a50c-e81643492293-kube-api-access-7zldq\") pod \"cinder-operator-controller-manager-859b6ccc6-cw2wq\" (UID: \"445299d7-37a7-4fa0-a50c-e81643492293\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.917800 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.938876 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.940295 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.946757 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-w9mxv"
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.962535 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb"]
Dec 03 09:26:05 crc kubenswrapper[4856]: I1203 09:26:05.982882 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v"]
Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.004157 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4"]
Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.005646 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.009183 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-chxvc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.009418 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.013962 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.017435 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mncd\" (UniqueName: \"kubernetes.io/projected/61066eb5-99e6-4ec9-9dea-3d2ecd8d456e-kube-api-access-5mncd\") pod \"heat-operator-controller-manager-5f64f6f8bb-8qjgb\" (UID: \"61066eb5-99e6-4ec9-9dea-3d2ecd8d456e\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.017505 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qs86\" (UniqueName: \"kubernetes.io/projected/65f07af7-c89a-403d-866e-f98462398697-kube-api-access-6qs86\") pod \"designate-operator-controller-manager-78b4bc895b-2j5bq\" (UID: \"65f07af7-c89a-403d-866e-f98462398697\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.017575 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgh7c\" (UniqueName: \"kubernetes.io/projected/ab444944-5290-47a9-a2ca-8c544c5350b6-kube-api-access-qgh7c\") pod \"barbican-operator-controller-manager-7d9dfd778-4m4j5\" (UID: \"ab444944-5290-47a9-a2ca-8c544c5350b6\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.017605 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zldq\" (UniqueName: \"kubernetes.io/projected/445299d7-37a7-4fa0-a50c-e81643492293-kube-api-access-7zldq\") pod \"cinder-operator-controller-manager-859b6ccc6-cw2wq\" (UID: \"445299d7-37a7-4fa0-a50c-e81643492293\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.017625 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26zc\" (UniqueName: \"kubernetes.io/projected/0f952368-5565-442c-8bcb-aa61130cb3c7-kube-api-access-m26zc\") pod \"glance-operator-controller-manager-77987cd8cd-bmn9s\" (UID: \"0f952368-5565-442c-8bcb-aa61130cb3c7\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.017656 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjssw\" (UniqueName: \"kubernetes.io/projected/4a3f5eb0-4264-4034-8c1f-4d8b53af8b21-kube-api-access-pjssw\") pod \"horizon-operator-controller-manager-68c6d99b8f-88k4v\" (UID: \"4a3f5eb0-4264-4034-8c1f-4d8b53af8b21\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" Dec 03 09:26:06 crc 
kubenswrapper[4856]: I1203 09:26:06.046201 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.047241 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.051955 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zldq\" (UniqueName: \"kubernetes.io/projected/445299d7-37a7-4fa0-a50c-e81643492293-kube-api-access-7zldq\") pod \"cinder-operator-controller-manager-859b6ccc6-cw2wq\" (UID: \"445299d7-37a7-4fa0-a50c-e81643492293\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.055375 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-md6s8" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.060523 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgh7c\" (UniqueName: \"kubernetes.io/projected/ab444944-5290-47a9-a2ca-8c544c5350b6-kube-api-access-qgh7c\") pod \"barbican-operator-controller-manager-7d9dfd778-4m4j5\" (UID: \"ab444944-5290-47a9-a2ca-8c544c5350b6\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.065978 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.085022 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.121772 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26zc\" (UniqueName: \"kubernetes.io/projected/0f952368-5565-442c-8bcb-aa61130cb3c7-kube-api-access-m26zc\") pod \"glance-operator-controller-manager-77987cd8cd-bmn9s\" (UID: \"0f952368-5565-442c-8bcb-aa61130cb3c7\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.121870 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjssw\" (UniqueName: \"kubernetes.io/projected/4a3f5eb0-4264-4034-8c1f-4d8b53af8b21-kube-api-access-pjssw\") pod \"horizon-operator-controller-manager-68c6d99b8f-88k4v\" (UID: \"4a3f5eb0-4264-4034-8c1f-4d8b53af8b21\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.121928 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mncd\" (UniqueName: \"kubernetes.io/projected/61066eb5-99e6-4ec9-9dea-3d2ecd8d456e-kube-api-access-5mncd\") pod \"heat-operator-controller-manager-5f64f6f8bb-8qjgb\" (UID: \"61066eb5-99e6-4ec9-9dea-3d2ecd8d456e\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.121973 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z299v\" (UniqueName: 
\"kubernetes.io/projected/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-kube-api-access-z299v\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.122031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.122093 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qs86\" (UniqueName: \"kubernetes.io/projected/65f07af7-c89a-403d-866e-f98462398697-kube-api-access-6qs86\") pod \"designate-operator-controller-manager-78b4bc895b-2j5bq\" (UID: \"65f07af7-c89a-403d-866e-f98462398697\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.139048 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.140519 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.142171 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.144966 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.148322 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.149318 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lvmq6" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.191146 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qs86\" (UniqueName: \"kubernetes.io/projected/65f07af7-c89a-403d-866e-f98462398697-kube-api-access-6qs86\") pod \"designate-operator-controller-manager-78b4bc895b-2j5bq\" (UID: \"65f07af7-c89a-403d-866e-f98462398697\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.212109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjssw\" (UniqueName: \"kubernetes.io/projected/4a3f5eb0-4264-4034-8c1f-4d8b53af8b21-kube-api-access-pjssw\") pod \"horizon-operator-controller-manager-68c6d99b8f-88k4v\" (UID: \"4a3f5eb0-4264-4034-8c1f-4d8b53af8b21\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.216208 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.217548 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2rjcz" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.226658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mncd\" (UniqueName: \"kubernetes.io/projected/61066eb5-99e6-4ec9-9dea-3d2ecd8d456e-kube-api-access-5mncd\") pod \"heat-operator-controller-manager-5f64f6f8bb-8qjgb\" (UID: \"61066eb5-99e6-4ec9-9dea-3d2ecd8d456e\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.228592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.228782 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvtn\" (UniqueName: \"kubernetes.io/projected/0de81d3b-bbf7-455a-8842-2261010f69a2-kube-api-access-fpvtn\") pod \"manila-operator-controller-manager-7c79b5df47-xfpwp\" (UID: \"0de81d3b-bbf7-455a-8842-2261010f69a2\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:06 crc kubenswrapper[4856]: E1203 09:26:06.229033 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.229055 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9k4t\" (UniqueName: \"kubernetes.io/projected/24007279-b1cb-4d5b-aca4-c55d0cd825b7-kube-api-access-w9k4t\") pod \"keystone-operator-controller-manager-7765d96ddf-txg26\" (UID: 
\"24007279-b1cb-4d5b-aca4-c55d0cd825b7\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:26:06 crc kubenswrapper[4856]: E1203 09:26:06.229123 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert podName:d3707fa3-12a0-490e-baac-1fd0ce34fbd5 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:06.729097945 +0000 UTC m=+834.911990246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert") pod "infra-operator-controller-manager-57548d458d-x8vm4" (UID: "d3707fa3-12a0-490e-baac-1fd0ce34fbd5") : secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.229278 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfwn\" (UniqueName: \"kubernetes.io/projected/bb90cacb-d6f2-4e30-a694-21cccff0a5d1-kube-api-access-lkfwn\") pod \"ironic-operator-controller-manager-6c548fd776-cg4vl\" (UID: \"bb90cacb-d6f2-4e30-a694-21cccff0a5d1\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.229346 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z299v\" (UniqueName: \"kubernetes.io/projected/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-kube-api-access-z299v\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.254671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26zc\" (UniqueName: \"kubernetes.io/projected/0f952368-5565-442c-8bcb-aa61130cb3c7-kube-api-access-m26zc\") pod \"glance-operator-controller-manager-77987cd8cd-bmn9s\" (UID: \"0f952368-5565-442c-8bcb-aa61130cb3c7\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.254763 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.266655 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.267297 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.268077 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.284819 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.289220 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.295506 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-j4wlq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.300428 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z299v\" (UniqueName: \"kubernetes.io/projected/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-kube-api-access-z299v\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.308871 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.309965 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.320918 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fc6hl" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.330241 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.331140 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.332202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpvtn\" (UniqueName: \"kubernetes.io/projected/0de81d3b-bbf7-455a-8842-2261010f69a2-kube-api-access-fpvtn\") pod \"manila-operator-controller-manager-7c79b5df47-xfpwp\" (UID: \"0de81d3b-bbf7-455a-8842-2261010f69a2\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.332248 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9k4t\" (UniqueName: \"kubernetes.io/projected/24007279-b1cb-4d5b-aca4-c55d0cd825b7-kube-api-access-w9k4t\") pod \"keystone-operator-controller-manager-7765d96ddf-txg26\" (UID: \"24007279-b1cb-4d5b-aca4-c55d0cd825b7\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.332298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkfwn\" (UniqueName: \"kubernetes.io/projected/bb90cacb-d6f2-4e30-a694-21cccff0a5d1-kube-api-access-lkfwn\") pod \"ironic-operator-controller-manager-6c548fd776-cg4vl\" (UID: \"bb90cacb-d6f2-4e30-a694-21cccff0a5d1\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.333015 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-x86t5" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.351303 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.370635 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkfwn\" (UniqueName: \"kubernetes.io/projected/bb90cacb-d6f2-4e30-a694-21cccff0a5d1-kube-api-access-lkfwn\") pod \"ironic-operator-controller-manager-6c548fd776-cg4vl\" (UID: \"bb90cacb-d6f2-4e30-a694-21cccff0a5d1\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.381573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpvtn\" (UniqueName: \"kubernetes.io/projected/0de81d3b-bbf7-455a-8842-2261010f69a2-kube-api-access-fpvtn\") pod \"manila-operator-controller-manager-7c79b5df47-xfpwp\" (UID: \"0de81d3b-bbf7-455a-8842-2261010f69a2\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.384158 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9k4t\" (UniqueName: \"kubernetes.io/projected/24007279-b1cb-4d5b-aca4-c55d0cd825b7-kube-api-access-w9k4t\") pod \"keystone-operator-controller-manager-7765d96ddf-txg26\" (UID: \"24007279-b1cb-4d5b-aca4-c55d0cd825b7\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.391860 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-dbfch"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.393492 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.400383 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jmcgc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.426891 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.432303 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.438245 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z4pv\" (UniqueName: \"kubernetes.io/projected/1bfd8278-80a9-41ca-a89e-400e8b62188f-kube-api-access-9z4pv\") pod \"nova-operator-controller-manager-697bc559fc-jg8fc\" (UID: \"1bfd8278-80a9-41ca-a89e-400e8b62188f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.438317 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffm9n\" (UniqueName: \"kubernetes.io/projected/352d270a-b735-411d-87ba-58719ee0f984-kube-api-access-ffm9n\") pod \"mariadb-operator-controller-manager-56bbcc9d85-72dlj\" (UID: \"352d270a-b735-411d-87ba-58719ee0f984\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.438377 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.438395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86w5\" (UniqueName: \"kubernetes.io/projected/e889c7da-b8e2-46bb-b700-f700e7e969bc-kube-api-access-j86w5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-dhp69\" (UID: \"e889c7da-b8e2-46bb-b700-f700e7e969bc\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.439751 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.440410 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.442010 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pfnck" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.442990 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.448322 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2vdl4" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.448519 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.453285 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-dbfch"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.472894 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.473484 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.475010 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.475481 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.476650 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.484732 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-h42fk" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.493702 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.507846 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.539859 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffm9n\" (UniqueName: \"kubernetes.io/projected/352d270a-b735-411d-87ba-58719ee0f984-kube-api-access-ffm9n\") pod \"mariadb-operator-controller-manager-56bbcc9d85-72dlj\" (UID: \"352d270a-b735-411d-87ba-58719ee0f984\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.539916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmlj\" (UniqueName: \"kubernetes.io/projected/2513c74a-1905-4f71-bf3c-c71095d756d3-kube-api-access-qlmlj\") pod \"ovn-operator-controller-manager-b6456fdb6-t6jdv\" (UID: \"2513c74a-1905-4f71-bf3c-c71095d756d3\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.539950 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86w5\" (UniqueName: \"kubernetes.io/projected/e889c7da-b8e2-46bb-b700-f700e7e969bc-kube-api-access-j86w5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-dhp69\" (UID: \"e889c7da-b8e2-46bb-b700-f700e7e969bc\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.539981 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6grx\" (UniqueName: \"kubernetes.io/projected/77383a17-c2e3-4f54-8296-414e707e2056-kube-api-access-f6grx\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.540003 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tgj\" (UniqueName: \"kubernetes.io/projected/1c5cce87-5371-47df-8471-7725731c9908-kube-api-access-66tgj\") pod \"octavia-operator-controller-manager-998648c74-dbfch\" (UID: \"1c5cce87-5371-47df-8471-7725731c9908\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.540037 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.540075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z4pv\" (UniqueName: \"kubernetes.io/projected/1bfd8278-80a9-41ca-a89e-400e8b62188f-kube-api-access-9z4pv\") pod \"nova-operator-controller-manager-697bc559fc-jg8fc\" (UID: \"1bfd8278-80a9-41ca-a89e-400e8b62188f\") " 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.580866 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.582023 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.586335 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.587642 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.609485 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7t4xc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.610612 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.610741 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.625419 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9mptr" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.633922 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.641975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlmlj\" (UniqueName: \"kubernetes.io/projected/2513c74a-1905-4f71-bf3c-c71095d756d3-kube-api-access-qlmlj\") pod \"ovn-operator-controller-manager-b6456fdb6-t6jdv\" (UID: \"2513c74a-1905-4f71-bf3c-c71095d756d3\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.642045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6grx\" (UniqueName: \"kubernetes.io/projected/77383a17-c2e3-4f54-8296-414e707e2056-kube-api-access-f6grx\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.642074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tgj\" (UniqueName: \"kubernetes.io/projected/1c5cce87-5371-47df-8471-7725731c9908-kube-api-access-66tgj\") pod \"octavia-operator-controller-manager-998648c74-dbfch\" (UID: \"1c5cce87-5371-47df-8471-7725731c9908\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.642108 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.642131 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/169cc116-1edd-4af9-b992-4bdb8e912231-kube-api-access-h55vb\") pod \"swift-operator-controller-manager-5f8c65bbfc-2rqwt\" (UID: \"169cc116-1edd-4af9-b992-4bdb8e912231\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.642179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzfr\" (UniqueName: \"kubernetes.io/projected/a13d32b8-0032-4c2b-9985-f865d89becdc-kube-api-access-stzfr\") pod \"placement-operator-controller-manager-78f8948974-mkl9t\" (UID: \"a13d32b8-0032-4c2b-9985-f865d89becdc\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:06 crc kubenswrapper[4856]: E1203 09:26:06.642408 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:06 crc kubenswrapper[4856]: E1203 09:26:06.642465 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert podName:77383a17-c2e3-4f54-8296-414e707e2056 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:07.142445716 +0000 UTC m=+835.325338017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" (UID: "77383a17-c2e3-4f54-8296-414e707e2056") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.645674 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffm9n\" (UniqueName: \"kubernetes.io/projected/352d270a-b735-411d-87ba-58719ee0f984-kube-api-access-ffm9n\") pod \"mariadb-operator-controller-manager-56bbcc9d85-72dlj\" (UID: \"352d270a-b735-411d-87ba-58719ee0f984\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.645846 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.660154 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z4pv\" (UniqueName: \"kubernetes.io/projected/1bfd8278-80a9-41ca-a89e-400e8b62188f-kube-api-access-9z4pv\") pod \"nova-operator-controller-manager-697bc559fc-jg8fc\" (UID: \"1bfd8278-80a9-41ca-a89e-400e8b62188f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.660338 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86w5\" (UniqueName: \"kubernetes.io/projected/e889c7da-b8e2-46bb-b700-f700e7e969bc-kube-api-access-j86w5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-dhp69\" (UID: \"e889c7da-b8e2-46bb-b700-f700e7e969bc\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.676764 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.677873 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.681798 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r69dp" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.682395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.703141 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6grx\" (UniqueName: \"kubernetes.io/projected/77383a17-c2e3-4f54-8296-414e707e2056-kube-api-access-f6grx\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.705327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlmlj\" (UniqueName: \"kubernetes.io/projected/2513c74a-1905-4f71-bf3c-c71095d756d3-kube-api-access-qlmlj\") pod \"ovn-operator-controller-manager-b6456fdb6-t6jdv\" (UID: \"2513c74a-1905-4f71-bf3c-c71095d756d3\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.709760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tgj\" (UniqueName: \"kubernetes.io/projected/1c5cce87-5371-47df-8471-7725731c9908-kube-api-access-66tgj\") pod \"octavia-operator-controller-manager-998648c74-dbfch\" (UID: \"1c5cce87-5371-47df-8471-7725731c9908\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.712306 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.743232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rwg\" (UniqueName: \"kubernetes.io/projected/8e9597d6-d043-4377-9b86-cf94a5df8ddf-kube-api-access-h4rwg\") pod \"test-operator-controller-manager-5854674fcc-h5j7g\" (UID: \"8e9597d6-d043-4377-9b86-cf94a5df8ddf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.743305 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47vr\" (UniqueName: \"kubernetes.io/projected/38e8c4db-27d8-4ffa-98e8-0859bec1243c-kube-api-access-z47vr\") pod \"telemetry-operator-controller-manager-76cc84c6bb-kfkmm\" (UID: \"38e8c4db-27d8-4ffa-98e8-0859bec1243c\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.743357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/169cc116-1edd-4af9-b992-4bdb8e912231-kube-api-access-h55vb\") pod \"swift-operator-controller-manager-5f8c65bbfc-2rqwt\" (UID: \"169cc116-1edd-4af9-b992-4bdb8e912231\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.743422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzfr\" (UniqueName: \"kubernetes.io/projected/a13d32b8-0032-4c2b-9985-f865d89becdc-kube-api-access-stzfr\") pod \"placement-operator-controller-manager-78f8948974-mkl9t\" (UID: \"a13d32b8-0032-4c2b-9985-f865d89becdc\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.743466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:06 crc kubenswrapper[4856]: E1203 09:26:06.743703 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:06 crc kubenswrapper[4856]: E1203 09:26:06.743763 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert podName:d3707fa3-12a0-490e-baac-1fd0ce34fbd5 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:07.743744143 +0000 UTC m=+835.926636444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert") pod "infra-operator-controller-manager-57548d458d-x8vm4" (UID: "d3707fa3-12a0-490e-baac-1fd0ce34fbd5") : secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.744569 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.795796 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzfr\" (UniqueName: \"kubernetes.io/projected/a13d32b8-0032-4c2b-9985-f865d89becdc-kube-api-access-stzfr\") pod \"placement-operator-controller-manager-78f8948974-mkl9t\" (UID: \"a13d32b8-0032-4c2b-9985-f865d89becdc\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.821701 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.822214 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/169cc116-1edd-4af9-b992-4bdb8e912231-kube-api-access-h55vb\") pod \"swift-operator-controller-manager-5f8c65bbfc-2rqwt\" (UID: \"169cc116-1edd-4af9-b992-4bdb8e912231\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.847314 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rwg\" (UniqueName: \"kubernetes.io/projected/8e9597d6-d043-4377-9b86-cf94a5df8ddf-kube-api-access-h4rwg\") pod \"test-operator-controller-manager-5854674fcc-h5j7g\" (UID: \"8e9597d6-d043-4377-9b86-cf94a5df8ddf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.847429 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47vr\" (UniqueName: \"kubernetes.io/projected/38e8c4db-27d8-4ffa-98e8-0859bec1243c-kube-api-access-z47vr\") pod \"telemetry-operator-controller-manager-76cc84c6bb-kfkmm\" (UID: \"38e8c4db-27d8-4ffa-98e8-0859bec1243c\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.899684 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47vr\" (UniqueName: \"kubernetes.io/projected/38e8c4db-27d8-4ffa-98e8-0859bec1243c-kube-api-access-z47vr\") pod \"telemetry-operator-controller-manager-76cc84c6bb-kfkmm\" (UID: \"38e8c4db-27d8-4ffa-98e8-0859bec1243c\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.899756 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rwg\" (UniqueName: \"kubernetes.io/projected/8e9597d6-d043-4377-9b86-cf94a5df8ddf-kube-api-access-h4rwg\") pod \"test-operator-controller-manager-5854674fcc-h5j7g\" (UID: \"8e9597d6-d043-4377-9b86-cf94a5df8ddf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.930509 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.930552 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.931557 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.931611 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.932045 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.932188 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.932275 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.936504 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mkdds" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.936720 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.937421 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.937599 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.937705 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xpnjw" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.943467 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.949287 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.957337 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm"] Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.957663 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.963976 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tz8j2" Dec 03 09:26:06 crc kubenswrapper[4856]: I1203 09:26:06.966758 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.020052 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.036545 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.055997 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9ln\" (UniqueName: \"kubernetes.io/projected/d82e1df1-d3d5-4f54-874f-291e3d82aac6-kube-api-access-xz9ln\") pod \"watcher-operator-controller-manager-769dc69bc-nccbk\" (UID: \"d82e1df1-d3d5-4f54-874f-291e3d82aac6\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.056130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk6gd\" (UniqueName: \"kubernetes.io/projected/25e84c2c-bca6-438b-ad4c-f7154e1ba97a-kube-api-access-gk6gd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rm6rm\" (UID: \"25e84c2c-bca6-438b-ad4c-f7154e1ba97a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.056182 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mlj\" (UniqueName: \"kubernetes.io/projected/daec5857-0ffc-4499-af65-5f9d7ef6baf9-kube-api-access-j8mlj\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.056224 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.056248 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.158849 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9ln\" (UniqueName: \"kubernetes.io/projected/d82e1df1-d3d5-4f54-874f-291e3d82aac6-kube-api-access-xz9ln\") pod \"watcher-operator-controller-manager-769dc69bc-nccbk\" (UID: \"d82e1df1-d3d5-4f54-874f-291e3d82aac6\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.158944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.159105 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk6gd\" 
(UniqueName: \"kubernetes.io/projected/25e84c2c-bca6-438b-ad4c-f7154e1ba97a-kube-api-access-gk6gd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rm6rm\" (UID: \"25e84c2c-bca6-438b-ad4c-f7154e1ba97a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.159155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8mlj\" (UniqueName: \"kubernetes.io/projected/daec5857-0ffc-4499-af65-5f9d7ef6baf9-kube-api-access-j8mlj\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.159202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.159237 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.161027 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.161078 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert podName:77383a17-c2e3-4f54-8296-414e707e2056 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:08.161061738 +0000 UTC m=+836.343954049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" (UID: "77383a17-c2e3-4f54-8296-414e707e2056") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.161321 4856 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.161359 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:07.661347165 +0000 UTC m=+835.844239466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "metrics-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.161504 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.161538 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:07.66152857 +0000 UTC m=+835.844420881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "webhook-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.191772 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9ln\" (UniqueName: \"kubernetes.io/projected/d82e1df1-d3d5-4f54-874f-291e3d82aac6-kube-api-access-xz9ln\") pod \"watcher-operator-controller-manager-769dc69bc-nccbk\" (UID: \"d82e1df1-d3d5-4f54-874f-291e3d82aac6\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.200133 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8mlj\" (UniqueName: \"kubernetes.io/projected/daec5857-0ffc-4499-af65-5f9d7ef6baf9-kube-api-access-j8mlj\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.202674 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk6gd\" (UniqueName: \"kubernetes.io/projected/25e84c2c-bca6-438b-ad4c-f7154e1ba97a-kube-api-access-gk6gd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rm6rm\" (UID: \"25e84c2c-bca6-438b-ad4c-f7154e1ba97a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.366484 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.409947 4856 util.go:30] "No sandbox for pod can be found. 
Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.409947 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm"
Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.536724 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5"]
Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.568544 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb"]
Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.669683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8"
Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.669745 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8"
Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.669928 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.669993 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:08.669971267 +0000 UTC m=+836.852863568 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "webhook-server-cert" not found
Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.670429 4856 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "metrics-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.734220 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v"] Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.745779 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq"] Dec 03 09:26:07 crc kubenswrapper[4856]: W1203 09:26:07.753508 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3f5eb0_4264_4034_8c1f_4d8b53af8b21.slice/crio-1c7d328dc1cb362f93b83f8201b399414cb63479b32d697160548b0abdce939f WatchSource:0}: Error finding container 1c7d328dc1cb362f93b83f8201b399414cb63479b32d697160548b0abdce939f: Status 404 returned error can't find the container with id 1c7d328dc1cb362f93b83f8201b399414cb63479b32d697160548b0abdce939f Dec 03 09:26:07 crc kubenswrapper[4856]: W1203 09:26:07.758543 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod445299d7_37a7_4fa0_a50c_e81643492293.slice/crio-8872f72e88d3e37185e89e1ab10624811a207817965baba4ff8722fe358f304a WatchSource:0}: Error finding container 8872f72e88d3e37185e89e1ab10624811a207817965baba4ff8722fe358f304a: Status 404 returned error can't find the container with id 8872f72e88d3e37185e89e1ab10624811a207817965baba4ff8722fe358f304a Dec 03 09:26:07 crc kubenswrapper[4856]: I1203 09:26:07.770682 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.772010 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:07 crc kubenswrapper[4856]: E1203 09:26:07.772081 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert podName:d3707fa3-12a0-490e-baac-1fd0ce34fbd5 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:09.772058235 +0000 UTC m=+837.954950616 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert") pod "infra-operator-controller-manager-57548d458d-x8vm4" (UID: "d3707fa3-12a0-490e-baac-1fd0ce34fbd5") : secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:08 crc kubenswrapper[4856]: W1203 09:26:08.145984 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod169cc116_1edd_4af9_b992_4bdb8e912231.slice/crio-95bbfde84ead8ea70ed169eefbbe2a0c582d9cbf83f08e37e45e6fbb43b89d6b WatchSource:0}: Error finding container 95bbfde84ead8ea70ed169eefbbe2a0c582d9cbf83f08e37e45e6fbb43b89d6b: Status 404 returned error can't find the container with id 95bbfde84ead8ea70ed169eefbbe2a0c582d9cbf83f08e37e45e6fbb43b89d6b Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.151311 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.154897 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.181956 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-dbfch"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.190437 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.190769 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.190844 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert podName:77383a17-c2e3-4f54-8296-414e707e2056 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:10.190826497 +0000 UTC m=+838.373718798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" (UID: "77383a17-c2e3-4f54-8296-414e707e2056") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.202882 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.221031 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.237190 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.244189 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.272086 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc"] Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.279782 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m26zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-bmn9s_openstack-operators(0f952368-5565-442c-8bcb-aa61130cb3c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.281262 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp"] Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.286499 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m26zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-bmn9s_openstack-operators(0f952368-5565-442c-8bcb-aa61130cb3c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.291593 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podUID="0f952368-5565-442c-8bcb-aa61130cb3c7" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.297873 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9k4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-txg26_openstack-operators(24007279-b1cb-4d5b-aca4-c55d0cd825b7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.301312 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9k4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-txg26_openstack-operators(24007279-b1cb-4d5b-aca4-c55d0cd825b7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.302911 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlmlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-t6jdv_openstack-operators(2513c74a-1905-4f71-bf3c-c71095d756d3): ErrImagePull: pull 
QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.303248 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" podUID="24007279-b1cb-4d5b-aca4-c55d0cd825b7" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.304184 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26"] Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.309572 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlmlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-t6jdv_openstack-operators(2513c74a-1905-4f71-bf3c-c71095d756d3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.311142 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" podUID="2513c74a-1905-4f71-bf3c-c71095d756d3" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.317092 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4rwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h5j7g_openstack-operators(8e9597d6-d043-4377-9b86-cf94a5df8ddf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.323494 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s"] Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.330493 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4rwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h5j7g_openstack-operators(8e9597d6-d043-4377-9b86-cf94a5df8ddf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.331763 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf" Dec 03 09:26:08 crc kubenswrapper[4856]: W1203 09:26:08.344599 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd82e1df1_d3d5_4f54_874f_291e3d82aac6.slice/crio-929301db2eecce2d73e98a452a7e20c7e005d5ddc844d71087d5e38f9d399f5b WatchSource:0}: Error finding container 929301db2eecce2d73e98a452a7e20c7e005d5ddc844d71087d5e38f9d399f5b: Status 404 returned error can't find the container with id 929301db2eecce2d73e98a452a7e20c7e005d5ddc844d71087d5e38f9d399f5b Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.344693 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g"] Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.355903 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xz9ln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-nccbk_openstack-operators(d82e1df1-d3d5-4f54-874f-291e3d82aac6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.358894 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xz9ln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-nccbk_openstack-operators(d82e1df1-d3d5-4f54-874f-291e3d82aac6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.360931 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" podUID="d82e1df1-d3d5-4f54-874f-291e3d82aac6" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.365168 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.375149 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk"] Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.378091 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z47vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-kfkmm_openstack-operators(38e8c4db-27d8-4ffa-98e8-0859bec1243c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.378235 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gk6gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rm6rm_openstack-operators(25e84c2c-bca6-438b-ad4c-f7154e1ba97a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.379908 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" podUID="25e84c2c-bca6-438b-ad4c-f7154e1ba97a" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.381504 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z47vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-kfkmm_openstack-operators(38e8c4db-27d8-4ffa-98e8-0859bec1243c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.382658 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" podUID="38e8c4db-27d8-4ffa-98e8-0859bec1243c" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.383594 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.392544 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm"] Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.604068 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" event={"ID":"bb90cacb-d6f2-4e30-a694-21cccff0a5d1","Type":"ContainerStarted","Data":"a68d1ed06bebf07d37a88f16e52832abfd7da115eb134d3327359b52f9638429"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.605222 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" event={"ID":"a13d32b8-0032-4c2b-9985-f865d89becdc","Type":"ContainerStarted","Data":"ae4db262901e9ef9c98c72cb30a93925d42cb433b834fc702d0bb8a5a82dc6d6"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.606306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" event={"ID":"0de81d3b-bbf7-455a-8842-2261010f69a2","Type":"ContainerStarted","Data":"9db1fe79b1e65af41492b33a76f6b481e258b54b4d9c66f03ceb407b75056ac0"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.607124 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" event={"ID":"e889c7da-b8e2-46bb-b700-f700e7e969bc","Type":"ContainerStarted","Data":"6241c03cf008f9c8b6e822622de39f2d7370b702fc667c0a5a35fc0d33984154"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.608134 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" 
event={"ID":"61066eb5-99e6-4ec9-9dea-3d2ecd8d456e","Type":"ContainerStarted","Data":"e4684400bedd244d68ba7b9ca4a0c82b869f12f1d038fc21359a5c41ac2ef8d9"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.612180 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" event={"ID":"24007279-b1cb-4d5b-aca4-c55d0cd825b7","Type":"ContainerStarted","Data":"9a47558cb00f688a232892b6696f3ff82756d7d2176c144532c20b85c67e63b7"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.613932 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" event={"ID":"1c5cce87-5371-47df-8471-7725731c9908","Type":"ContainerStarted","Data":"08de9d838285c092a4d314439af014d9f6174c6fa69e7f7df7bcc9e29a0ea8f0"} Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.614503 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" podUID="24007279-b1cb-4d5b-aca4-c55d0cd825b7" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.615536 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" event={"ID":"1bfd8278-80a9-41ca-a89e-400e8b62188f","Type":"ContainerStarted","Data":"01dd048bfebcbd406f39cd00e640f5b378e082fa2ef432b28a8c7e0e5d2cfbe6"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.617284 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" event={"ID":"445299d7-37a7-4fa0-a50c-e81643492293","Type":"ContainerStarted","Data":"8872f72e88d3e37185e89e1ab10624811a207817965baba4ff8722fe358f304a"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.618485 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" event={"ID":"2513c74a-1905-4f71-bf3c-c71095d756d3","Type":"ContainerStarted","Data":"80bfaca5079300b1a221773f60db06a04e117d11b7eb83a7c20e34915c141aef"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.624820 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" event={"ID":"352d270a-b735-411d-87ba-58719ee0f984","Type":"ContainerStarted","Data":"96d9b1737385664a546f77db4538a95dd58de578ee540b08c3da867dc6407056"} Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.624996 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" podUID="2513c74a-1905-4f71-bf3c-c71095d756d3" Dec 03 09:26:08 crc 
Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.648099 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" event={"ID":"38e8c4db-27d8-4ffa-98e8-0859bec1243c","Type":"ContainerStarted","Data":"6bd519483badeaef625695dfd5bc14def60b695a8c8b0d04ac5908c859afd614"}
Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.649984 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" podUID="38e8c4db-27d8-4ffa-98e8-0859bec1243c"
Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.652725 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" event={"ID":"ab444944-5290-47a9-a2ca-8c544c5350b6","Type":"ContainerStarted","Data":"a65e2fdbc68bc3ad49efeacb06ded26f5374c544214c9c0865716c7d95b00b9b"}
Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.666547 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" event={"ID":"8e9597d6-d043-4377-9b86-cf94a5df8ddf","Type":"ContainerStarted","Data":"6a8c5165c418f978b9baed0cde8a528a31068829759d01c5ea7ff77b46633827"}
Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.671610 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf"
Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.673587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" event={"ID":"25e84c2c-bca6-438b-ad4c-f7154e1ba97a","Type":"ContainerStarted","Data":"8ea829e85c348e3e84ca9b7841b8e843edfe512be84441e37bb3255e0306e427"}
Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.676280 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" podUID="25e84c2c-bca6-438b-ad4c-f7154e1ba97a"
Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.676284 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" event={"ID":"4a3f5eb0-4264-4034-8c1f-4d8b53af8b21","Type":"ContainerStarted","Data":"1c7d328dc1cb362f93b83f8201b399414cb63479b32d697160548b0abdce939f"}
Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.678193 
4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" event={"ID":"65f07af7-c89a-403d-866e-f98462398697","Type":"ContainerStarted","Data":"54a6708a1f74adaf5aeaea144bcfbb39afe0b3c8e9b0b702e0c5eafb7298db88"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.683176 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" event={"ID":"d82e1df1-d3d5-4f54-874f-291e3d82aac6","Type":"ContainerStarted","Data":"929301db2eecce2d73e98a452a7e20c7e005d5ddc844d71087d5e38f9d399f5b"} Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.688271 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" podUID="d82e1df1-d3d5-4f54-874f-291e3d82aac6" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.693768 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podUID="0f952368-5565-442c-8bcb-aa61130cb3c7" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.706046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" event={"ID":"0f952368-5565-442c-8bcb-aa61130cb3c7","Type":"ContainerStarted","Data":"06fee4aff396e60f70a211fcf83275a2814ccb8cd6e2747a2b2afe071f2dc8ca"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.706089 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" event={"ID":"169cc116-1edd-4af9-b992-4bdb8e912231","Type":"ContainerStarted","Data":"95bbfde84ead8ea70ed169eefbbe2a0c582d9cbf83f08e37e45e6fbb43b89d6b"} Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.710599 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:08 crc kubenswrapper[4856]: I1203 09:26:08.710648 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 
09:26:08.710758 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.710796 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:10.710784154 +0000 UTC m=+838.893676455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "webhook-server-cert" not found Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.711303 4856 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 09:26:08 crc kubenswrapper[4856]: E1203 09:26:08.711363 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:10.711344668 +0000 UTC m=+838.894237039 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "metrics-server-cert" not found Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.731025 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" podUID="25e84c2c-bca6-438b-ad4c-f7154e1ba97a" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.732475 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" podUID="38e8c4db-27d8-4ffa-98e8-0859bec1243c" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.732981 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.732991 4856 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" podUID="2513c74a-1905-4f71-bf3c-c71095d756d3" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.733177 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" podUID="d82e1df1-d3d5-4f54-874f-291e3d82aac6" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.736646 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podUID="0f952368-5565-442c-8bcb-aa61130cb3c7" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.752627 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" podUID="24007279-b1cb-4d5b-aca4-c55d0cd825b7" Dec 03 09:26:09 crc kubenswrapper[4856]: I1203 09:26:09.832885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.833054 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:09 crc kubenswrapper[4856]: E1203 09:26:09.833126 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert podName:d3707fa3-12a0-490e-baac-1fd0ce34fbd5 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:13.833108764 +0000 UTC m=+842.016001065 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert") pod "infra-operator-controller-manager-57548d458d-x8vm4" (UID: "d3707fa3-12a0-490e-baac-1fd0ce34fbd5") : secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:10 crc kubenswrapper[4856]: I1203 09:26:10.242516 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.242782 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.242900 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert podName:77383a17-c2e3-4f54-8296-414e707e2056 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:14.242882352 +0000 UTC m=+842.425774653 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" (UID: "77383a17-c2e3-4f54-8296-414e707e2056") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.744596 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podUID="0f952368-5565-442c-8bcb-aa61130cb3c7" Dec 03 09:26:10 crc kubenswrapper[4856]: I1203 09:26:10.752050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:10 crc kubenswrapper[4856]: I1203 09:26:10.752110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.753004 4856 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.753077 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:14.753047594 +0000 UTC m=+842.935939905 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "metrics-server-cert" not found Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.753143 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 09:26:10 crc kubenswrapper[4856]: E1203 09:26:10.753173 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:14.753164767 +0000 UTC m=+842.936057068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "webhook-server-cert" not found Dec 03 09:26:13 crc kubenswrapper[4856]: I1203 09:26:13.838027 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:13 crc kubenswrapper[4856]: E1203 09:26:13.838202 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:13 crc kubenswrapper[4856]: E1203 09:26:13.838789 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert podName:d3707fa3-12a0-490e-baac-1fd0ce34fbd5 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:21.83876805 +0000 UTC m=+850.021660351 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert") pod "infra-operator-controller-manager-57548d458d-x8vm4" (UID: "d3707fa3-12a0-490e-baac-1fd0ce34fbd5") : secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:14 crc kubenswrapper[4856]: I1203 09:26:14.246269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:14 crc kubenswrapper[4856]: E1203 09:26:14.246519 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:14 crc kubenswrapper[4856]: E1203 09:26:14.246595 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert podName:77383a17-c2e3-4f54-8296-414e707e2056 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:22.246570727 +0000 UTC m=+850.429463038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" (UID: "77383a17-c2e3-4f54-8296-414e707e2056") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:14 crc kubenswrapper[4856]: I1203 09:26:14.759045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:14 crc kubenswrapper[4856]: I1203 09:26:14.759098 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:14 crc kubenswrapper[4856]: E1203 09:26:14.759257 4856 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 09:26:14 crc kubenswrapper[4856]: E1203 09:26:14.759350 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:22.759328664 +0000 UTC m=+850.942221025 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "metrics-server-cert" not found Dec 03 09:26:14 crc kubenswrapper[4856]: E1203 09:26:14.759420 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 09:26:14 crc kubenswrapper[4856]: E1203 09:26:14.759592 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:22.75956089 +0000 UTC m=+850.942453191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "webhook-server-cert" not found Dec 03 09:26:21 crc kubenswrapper[4856]: E1203 09:26:21.163527 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 03 09:26:21 crc kubenswrapper[4856]: E1203 09:26:21.164890 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mncd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-8qjgb_openstack-operators(61066eb5-99e6-4ec9-9dea-3d2ecd8d456e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:21 crc kubenswrapper[4856]: E1203 09:26:21.821422 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 03 09:26:21 crc kubenswrapper[4856]: E1203 09:26:21.821591 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h55vb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-2rqwt_openstack-operators(169cc116-1edd-4af9-b992-4bdb8e912231): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:21 crc kubenswrapper[4856]: I1203 09:26:21.871525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:21 crc kubenswrapper[4856]: E1203 09:26:21.871749 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:21 crc kubenswrapper[4856]: E1203 09:26:21.871874 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert podName:d3707fa3-12a0-490e-baac-1fd0ce34fbd5 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:37.871851735 +0000 UTC m=+866.054744096 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert") pod "infra-operator-controller-manager-57548d458d-x8vm4" (UID: "d3707fa3-12a0-490e-baac-1fd0ce34fbd5") : secret "infra-operator-webhook-server-cert" not found Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.277065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:22 crc kubenswrapper[4856]: E1203 09:26:22.277270 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:22 crc kubenswrapper[4856]: E1203 09:26:22.277378 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert podName:77383a17-c2e3-4f54-8296-414e707e2056 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:38.277342222 +0000 UTC m=+866.460234523 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" (UID: "77383a17-c2e3-4f54-8296-414e707e2056") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.759393 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.759874 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.759941 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.760657 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4afd4c86b5fa95dc80aeac436708ba8256d79892daf553ea2ccc017832ea0fc1"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.760717 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://4afd4c86b5fa95dc80aeac436708ba8256d79892daf553ea2ccc017832ea0fc1" gracePeriod=600 Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.783224 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:22 crc kubenswrapper[4856]: I1203 09:26:22.783304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:22 crc kubenswrapper[4856]: E1203 09:26:22.783360 4856 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 09:26:22 crc kubenswrapper[4856]: E1203 09:26:22.783442 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:38.783419985 +0000 UTC m=+866.966312466 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "metrics-server-cert" not found Dec 03 09:26:22 crc kubenswrapper[4856]: E1203 09:26:22.783558 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 09:26:22 crc kubenswrapper[4856]: E1203 09:26:22.783643 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs podName:daec5857-0ffc-4499-af65-5f9d7ef6baf9 nodeName:}" failed. No retries permitted until 2025-12-03 09:26:38.78361825 +0000 UTC m=+866.966510731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs") pod "openstack-operator-controller-manager-864649db6c-vtrz8" (UID: "daec5857-0ffc-4499-af65-5f9d7ef6baf9") : secret "webhook-server-cert" not found Dec 03 09:26:23 crc kubenswrapper[4856]: I1203 09:26:23.916837 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" event={"ID":"1c5cce87-5371-47df-8471-7725731c9908","Type":"ContainerStarted","Data":"e22135caeddae73c79ad06837511a049e5425e5239baada079650af87fc1e200"} Dec 03 09:26:23 crc kubenswrapper[4856]: I1203 09:26:23.919058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" event={"ID":"65f07af7-c89a-403d-866e-f98462398697","Type":"ContainerStarted","Data":"d90308159adf968bf3c33f8d89bcc9a13ef847434a6d0ac3a6b09639416171fc"} Dec 03 09:26:23 crc kubenswrapper[4856]: I1203 09:26:23.925656 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" event={"ID":"1bfd8278-80a9-41ca-a89e-400e8b62188f","Type":"ContainerStarted","Data":"ddaba32d3c5e9eea744a6bfa895a31c676f694a12b2e563db89d02a3395271c8"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.006540 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" event={"ID":"a13d32b8-0032-4c2b-9985-f865d89becdc","Type":"ContainerStarted","Data":"9db4459617a5b7f24b77a86299020678640766b7876b89948227fb42b143a0ab"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.045684 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" event={"ID":"352d270a-b735-411d-87ba-58719ee0f984","Type":"ContainerStarted","Data":"35f0941786edb913d14b46541622e246e6d9a2b5ce0791cfcd3709812fac12e6"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.047315 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" event={"ID":"4a3f5eb0-4264-4034-8c1f-4d8b53af8b21","Type":"ContainerStarted","Data":"d96bd3e84cb1c4055a9228cff3ce4c031e284529893f5e7b2598fed7903c9c6a"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.048535 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" 
event={"ID":"e889c7da-b8e2-46bb-b700-f700e7e969bc","Type":"ContainerStarted","Data":"ceab89333c3cb750261ca5f77d4315ead83264e52371ce16ceb36bae68a696f1"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.050410 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="4afd4c86b5fa95dc80aeac436708ba8256d79892daf553ea2ccc017832ea0fc1" exitCode=0 Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.050448 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"4afd4c86b5fa95dc80aeac436708ba8256d79892daf553ea2ccc017832ea0fc1"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.050463 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"c7436d114857ce6a80c004a493e358196edfa3481c485bfceac4068566215e92"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.050478 4856 scope.go:117] "RemoveContainer" containerID="07013b96980f0676347fe6a2c164de77c3fb10edbc203fe34faa52e278993b46" Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.052990 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" event={"ID":"bb90cacb-d6f2-4e30-a694-21cccff0a5d1","Type":"ContainerStarted","Data":"b9b94e5b02fcfc910332d0c09f2fee495df7e1149e3bbaadf5c9a1a383990561"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.054096 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" event={"ID":"445299d7-37a7-4fa0-a50c-e81643492293","Type":"ContainerStarted","Data":"ad03214522b4dac33e4d20c4bd03b415045525b474aa9bcfbdb56ee89d8ef1f8"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.055079 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" event={"ID":"ab444944-5290-47a9-a2ca-8c544c5350b6","Type":"ContainerStarted","Data":"89e8208be9265777a199aa28544840d25b87b9850549482c9dfe71937f558045"} Dec 03 09:26:24 crc kubenswrapper[4856]: I1203 09:26:24.055988 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" event={"ID":"0de81d3b-bbf7-455a-8842-2261010f69a2","Type":"ContainerStarted","Data":"0a900237b66383b9ae1c608b96cd20676e8db1c6ad460147553a373a3193f8cc"} Dec 03 09:26:37 crc kubenswrapper[4856]: I1203 09:26:37.960071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:37 crc kubenswrapper[4856]: I1203 09:26:37.988319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3707fa3-12a0-490e-baac-1fd0ce34fbd5-cert\") pod \"infra-operator-controller-manager-57548d458d-x8vm4\" (UID: \"d3707fa3-12a0-490e-baac-1fd0ce34fbd5\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.136256 4856 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-chxvc" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.144510 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.377622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.383906 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77383a17-c2e3-4f54-8296-414e707e2056-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59\" (UID: \"77383a17-c2e3-4f54-8296-414e707e2056\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.408124 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2vdl4" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.415881 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.787834 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.787900 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.792599 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-metrics-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.800263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daec5857-0ffc-4499-af65-5f9d7ef6baf9-webhook-certs\") pod \"openstack-operator-controller-manager-864649db6c-vtrz8\" (UID: \"daec5857-0ffc-4499-af65-5f9d7ef6baf9\") " pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.887785 4856 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xpnjw" Dec 03 09:26:38 crc kubenswrapper[4856]: I1203 09:26:38.895492 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:39 crc kubenswrapper[4856]: E1203 09:26:39.000675 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 03 09:26:39 crc kubenswrapper[4856]: E1203 09:26:39.001118 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m26zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-bmn9s_openstack-operators(0f952368-5565-442c-8bcb-aa61130cb3c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:40 crc kubenswrapper[4856]: E1203 09:26:40.031913 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 03 09:26:40 crc kubenswrapper[4856]: E1203 09:26:40.032231 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gk6gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rm6rm_openstack-operators(25e84c2c-bca6-438b-ad4c-f7154e1ba97a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:40 crc kubenswrapper[4856]: E1203 09:26:40.033469 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" podUID="25e84c2c-bca6-438b-ad4c-f7154e1ba97a" Dec 03 09:26:51 crc kubenswrapper[4856]: E1203 09:26:51.086432 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:51 crc kubenswrapper[4856]: E1203 09:26:51.087243 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mncd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-8qjgb_openstack-operators(61066eb5-99e6-4ec9-9dea-3d2ecd8d456e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:51 crc kubenswrapper[4856]: E1203 09:26:51.088453 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" podUID="61066eb5-99e6-4ec9-9dea-3d2ecd8d456e" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.892937 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.893735 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjssw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod horizon-operator-controller-manager-68c6d99b8f-88k4v_openstack-operators(4a3f5eb0-4264-4034-8c1f-4d8b53af8b21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.895932 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" podUID="4a3f5eb0-4264-4034-8c1f-4d8b53af8b21" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.898676 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.899001 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qs86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-2j5bq_openstack-operators(65f07af7-c89a-403d-866e-f98462398697): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.900263 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" podUID="65f07af7-c89a-403d-866e-f98462398697" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.901267 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.901380 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ffm9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-72dlj_openstack-operators(352d270a-b735-411d-87ba-58719ee0f984): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.902436 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.902526 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" podUID="352d270a-b735-411d-87ba-58719ee0f984" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.902835 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9z4pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-jg8fc_openstack-operators(1bfd8278-80a9-41ca-a89e-400e8b62188f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.904013 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" podUID="1bfd8278-80a9-41ca-a89e-400e8b62188f" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.904065 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.904241 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stzfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-mkl9t_openstack-operators(a13d32b8-0032-4c2b-9985-f865d89becdc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: 
E1203 09:26:53.905481 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" podUID="a13d32b8-0032-4c2b-9985-f865d89becdc" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.906144 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.906217 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.906337 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j86w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-dhp69_openstack-operators(e889c7da-b8e2-46bb-b700-f700e7e969bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.906361 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgh7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-4m4j5_openstack-operators(ab444944-5290-47a9-a2ca-8c544c5350b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.907996 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" podUID="e889c7da-b8e2-46bb-b700-f700e7e969bc" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.908022 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" podUID="ab444944-5290-47a9-a2ca-8c544c5350b6" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.911841 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.912162 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4rwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h5j7g_openstack-operators(8e9597d6-d043-4377-9b86-cf94a5df8ddf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.916102 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.916382 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66tgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-dbfch_openstack-operators(1c5cce87-5371-47df-8471-7725731c9908): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.917601 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" podUID="1c5cce87-5371-47df-8471-7725731c9908" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.923996 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.924170 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkfwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-cg4vl_openstack-operators(bb90cacb-d6f2-4e30-a694-21cccff0a5d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.925519 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" podUID="bb90cacb-d6f2-4e30-a694-21cccff0a5d1" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.969559 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.969741 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zldq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-cw2wq_openstack-operators(445299d7-37a7-4fa0-a50c-e81643492293): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:53 crc kubenswrapper[4856]: E1203 09:26:53.970963 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" podUID="445299d7-37a7-4fa0-a50c-e81643492293" Dec 03 09:26:54 crc kubenswrapper[4856]: E1203 09:26:54.266546 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 09:26:54 crc kubenswrapper[4856]: E1203 09:26:54.266826 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h55vb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-2rqwt_openstack-operators(169cc116-1edd-4af9-b992-4bdb8e912231): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:26:54 crc 
kubenswrapper[4856]: E1203 09:26:54.268107 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" podUID="169cc116-1edd-4af9-b992-4bdb8e912231" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.394733 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.394767 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.394780 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.395360 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.396326 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.398202 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.398589 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" Dec 03 09:26:54 crc kubenswrapper[4856]: I1203 09:26:54.398964 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.730630 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" podUID="445299d7-37a7-4fa0-a50c-e81643492293" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.812844 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" podUID="65f07af7-c89a-403d-866e-f98462398697" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813057 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" podUID="1c5cce87-5371-47df-8471-7725731c9908" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813179 4856 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" podUID="352d270a-b735-411d-87ba-58719ee0f984" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813240 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" podUID="e889c7da-b8e2-46bb-b700-f700e7e969bc" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813221 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" podUID="1bfd8278-80a9-41ca-a89e-400e8b62188f" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813366 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" podUID="ab444944-5290-47a9-a2ca-8c544c5350b6" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813392 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" podUID="bb90cacb-d6f2-4e30-a694-21cccff0a5d1" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.813493 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" podUID="25e84c2c-bca6-438b-ad4c-f7154e1ba97a" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.888557 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" podUID="a13d32b8-0032-4c2b-9985-f865d89becdc" Dec 03 09:26:55 crc kubenswrapper[4856]: E1203 09:26:55.888833 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" podUID="4a3f5eb0-4264-4034-8c1f-4d8b53af8b21" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.140205 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" Dec 03 
09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.143517 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.149122 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.150955 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.342993 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59"] Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.413959 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" event={"ID":"77383a17-c2e3-4f54-8296-414e707e2056","Type":"ContainerStarted","Data":"dd5da07144b2129021453ec60b0ffd69b82fd2146f39cf4cf4edbdf10e4fef62"} Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.475884 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.480945 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.583901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8"] Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.636127 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4"] Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.682952 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.687343 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.713938 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.716602 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" Dec 03 09:26:56 crc kubenswrapper[4856]: W1203 09:26:56.772130 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaec5857_0ffc_4499_af65_5f9d7ef6baf9.slice/crio-4e1aafe6a8f4ffc29974ecae34a09310a6cc1e12498131be48141063cc44881d WatchSource:0}: Error finding container 4e1aafe6a8f4ffc29974ecae34a09310a6cc1e12498131be48141063cc44881d: Status 404 returned error can't find the container with id 4e1aafe6a8f4ffc29974ecae34a09310a6cc1e12498131be48141063cc44881d Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.945038 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:56 crc kubenswrapper[4856]: I1203 09:26:56.953439 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.428589 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" event={"ID":"a13d32b8-0032-4c2b-9985-f865d89becdc","Type":"ContainerStarted","Data":"26521d48b007c46bcbc4e28f9fad878c0734aefdd4f3c707f220138cb6584bbe"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.444045 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" event={"ID":"2513c74a-1905-4f71-bf3c-c71095d756d3","Type":"ContainerStarted","Data":"144a47b3258fd8eb2e6d924c8964816e65fbea88b4429c7f0de97dd9fe9f7dbb"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.449084 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" event={"ID":"d82e1df1-d3d5-4f54-874f-291e3d82aac6","Type":"ContainerStarted","Data":"4e306db30286e43ed91889fbcec001b622c906c9ad3fc9ffef4c76113f067c72"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.453976 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-mkl9t" podStartSLOduration=37.19114692 podStartE2EDuration="51.453955035s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.23450578 +0000 UTC m=+836.417398081" lastFinishedPulling="2025-12-03 09:26:22.497313895 +0000 UTC m=+850.680206196" observedRunningTime="2025-12-03 09:26:57.452229411 +0000 UTC m=+885.635121712" watchObservedRunningTime="2025-12-03 09:26:57.453955035 +0000 UTC m=+885.636847336" Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.465208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" event={"ID":"24007279-b1cb-4d5b-aca4-c55d0cd825b7","Type":"ContainerStarted","Data":"55c5d46c65d2332acb733caec68126f031f96e998440c7925789141f75771c27"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.470020 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" event={"ID":"daec5857-0ffc-4499-af65-5f9d7ef6baf9","Type":"ContainerStarted","Data":"4e1aafe6a8f4ffc29974ecae34a09310a6cc1e12498131be48141063cc44881d"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.484454 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" event={"ID":"38e8c4db-27d8-4ffa-98e8-0859bec1243c","Type":"ContainerStarted","Data":"77c8c7791e87db1f68d421809fa14fcf70d08c6ae77689b82f73d78831d51ae5"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.492030 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" event={"ID":"d3707fa3-12a0-490e-baac-1fd0ce34fbd5","Type":"ContainerStarted","Data":"cd86c89309ed03c3c31e8b4f398ef035299c4c5af81df5ce05ee587baeed05c0"} Dec 03 09:26:57 crc kubenswrapper[4856]: I1203 09:26:57.493904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" event={"ID":"61066eb5-99e6-4ec9-9dea-3d2ecd8d456e","Type":"ContainerStarted","Data":"cc23f5d13f71b389a999970584a2621cf28bc6d836e18b9733e21d2f88b76c88"} Dec 03 09:26:58 crc kubenswrapper[4856]: E1203 09:26:58.449892 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf" Dec 03 09:26:58 crc kubenswrapper[4856]: E1203 09:26:58.472090 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podUID="0f952368-5565-442c-8bcb-aa61130cb3c7" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.514457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" event={"ID":"352d270a-b735-411d-87ba-58719ee0f984","Type":"ContainerStarted","Data":"e734c197a97298ff68e3adbdb49a05ee1b7e003c09e593be23531dea2d092847"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.590355 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" event={"ID":"8e9597d6-d043-4377-9b86-cf94a5df8ddf","Type":"ContainerStarted","Data":"226f14a662862df90f053585fcf8779b33d0caa4956c7a41d27587d9a5c79f57"} Dec 03 09:26:58 crc kubenswrapper[4856]: E1203 09:26:58.609371 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.630682 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-72dlj" podStartSLOduration=38.439847242 podStartE2EDuration="52.630663531s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.253493573 +0000 UTC m=+836.436385874" lastFinishedPulling="2025-12-03 09:26:22.444309832 +0000 UTC m=+850.627202163" observedRunningTime="2025-12-03 09:26:58.56046408 +0000 UTC m=+886.743356381" watchObservedRunningTime="2025-12-03 09:26:58.630663531 +0000 UTC m=+886.813555832" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.632742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" event={"ID":"0de81d3b-bbf7-455a-8842-2261010f69a2","Type":"ContainerStarted","Data":"ce0fdc3b874236aa3e8af2bea9309a02540f7c844067519364ea6ddd1c2b274a"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.633257 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.635186 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.677096 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-xfpwp" podStartSLOduration=5.159661603 podStartE2EDuration="53.677074646s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.254681723 +0000 UTC m=+836.437574024" lastFinishedPulling="2025-12-03 09:26:56.772094766 +0000 UTC m=+884.954987067" observedRunningTime="2025-12-03 09:26:58.66742581 +0000 UTC m=+886.850318111" watchObservedRunningTime="2025-12-03 09:26:58.677074646 +0000 UTC m=+886.859966947" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.731186 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" event={"ID":"169cc116-1edd-4af9-b992-4bdb8e912231","Type":"ContainerStarted","Data":"df44b3a9b069e644d9f97fa2d56e5025513f44068b10d0dec01425c3dba4c1e3"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.740918 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" event={"ID":"65f07af7-c89a-403d-866e-f98462398697","Type":"ContainerStarted","Data":"4744b3f5d2828d4e583ef682a94e202ffca2f3ece78997b28319bfbc755cce68"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.759151 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" event={"ID":"1bfd8278-80a9-41ca-a89e-400e8b62188f","Type":"ContainerStarted","Data":"be9a9fdf4cb7554b64e7f78eecf1286f027b078611276815c87e0f64c6a9eeeb"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.767643 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2j5bq" podStartSLOduration=39.577166376 podStartE2EDuration="53.767623566s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.255086124 +0000 UTC m=+836.437978425" lastFinishedPulling="2025-12-03 09:26:22.445543314 +0000 UTC m=+850.628435615" observedRunningTime="2025-12-03 09:26:58.765583194 +0000 UTC m=+886.948475495" watchObservedRunningTime="2025-12-03 09:26:58.767623566 +0000 UTC m=+886.950515867" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.795760 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jg8fc" podStartSLOduration=38.482825829 podStartE2EDuration="52.795727843s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.255118335 +0000 UTC m=+836.438010636" lastFinishedPulling="2025-12-03 09:26:22.568020339 +0000 UTC m=+850.750912650" observedRunningTime="2025-12-03 09:26:58.794376809 +0000 UTC m=+886.977269110" watchObservedRunningTime="2025-12-03 09:26:58.795727843 +0000 UTC m=+886.978620144" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.800384 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" event={"ID":"0f952368-5565-442c-8bcb-aa61130cb3c7","Type":"ContainerStarted","Data":"4a1fef64b5160b2d76fec748859393ab14198bb122cc6390837088cada2c5dc1"} Dec 03 09:26:58 crc kubenswrapper[4856]: E1203 09:26:58.819049 4856 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podUID="0f952368-5565-442c-8bcb-aa61130cb3c7" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.831370 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" event={"ID":"4a3f5eb0-4264-4034-8c1f-4d8b53af8b21","Type":"ContainerStarted","Data":"2eaaddf5d664a224e1458243cb65d0f54886779e11cd463ae1c855abd5921d22"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.835786 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" event={"ID":"ab444944-5290-47a9-a2ca-8c544c5350b6","Type":"ContainerStarted","Data":"5d44e1c1b465edd3cffd7bd54d14e575ee07c4f34582db98c4fbc8620d62d515"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.880103 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" event={"ID":"daec5857-0ffc-4499-af65-5f9d7ef6baf9","Type":"ContainerStarted","Data":"f8af74ddae7ea71f739c27f414589fc13354cdaea14c7ff5a62fd587d51aa938"} Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.880531 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:26:58 crc kubenswrapper[4856]: I1203 09:26:58.905490 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4m4j5" podStartSLOduration=38.983497797 podStartE2EDuration="53.905471503s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:07.590539547 +0000 UTC m=+835.773431848" lastFinishedPulling="2025-12-03 09:26:22.512513253 +0000 UTC m=+850.695405554" observedRunningTime="2025-12-03 09:26:58.890378428 +0000 UTC m=+887.073270729" watchObservedRunningTime="2025-12-03 09:26:58.905471503 +0000 UTC m=+887.088363804" Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:58.955744 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-88k4v" podStartSLOduration=39.266928801 podStartE2EDuration="53.955727126s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:07.756729818 +0000 UTC m=+835.939622119" lastFinishedPulling="2025-12-03 09:26:22.445528143 +0000 UTC m=+850.628420444" observedRunningTime="2025-12-03 09:26:58.954483314 +0000 UTC m=+887.137375615" watchObservedRunningTime="2025-12-03 09:26:58.955727126 +0000 UTC m=+887.138619427" Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.044083 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" event={"ID":"1c5cce87-5371-47df-8471-7725731c9908","Type":"ContainerStarted","Data":"5cde33b2d4a0858e7bdf2a4e659cad16162f8a69e708af25a75373aaf784f6e9"} Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.083407 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" podStartSLOduration=53.083387163 
podStartE2EDuration="53.083387163s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:26:59.081513456 +0000 UTC m=+887.264405757" watchObservedRunningTime="2025-12-03 09:26:59.083387163 +0000 UTC m=+887.266279464" Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.104051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" event={"ID":"e889c7da-b8e2-46bb-b700-f700e7e969bc","Type":"ContainerStarted","Data":"6959c56577ccba451a8ce5a68d9ac8a3336bb4cf8736906e37192a24d76c61b9"} Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.150295 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" event={"ID":"bb90cacb-d6f2-4e30-a694-21cccff0a5d1","Type":"ContainerStarted","Data":"c0cef8a042fdbd03a21ad0db35dcabe029bc90469b906a5e66169e024f5f400d"} Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.316464 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dhp69" podStartSLOduration=39.027543707 podStartE2EDuration="53.31643585s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.156682522 +0000 UTC m=+836.339574823" lastFinishedPulling="2025-12-03 09:26:22.445574665 +0000 UTC m=+850.628466966" observedRunningTime="2025-12-03 09:26:59.293661369 +0000 UTC m=+887.476553680" watchObservedRunningTime="2025-12-03 09:26:59.31643585 +0000 UTC m=+887.499328151" Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.316644 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-dbfch" podStartSLOduration=39.083247231 podStartE2EDuration="53.316635515s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.212070208 +0000 UTC m=+836.394962519" lastFinishedPulling="2025-12-03 09:26:22.445458462 +0000 UTC m=+850.628350803" observedRunningTime="2025-12-03 09:26:59.122279166 +0000 UTC m=+887.305171477" watchObservedRunningTime="2025-12-03 09:26:59.316635515 +0000 UTC m=+887.499527816" Dec 03 09:26:59 crc kubenswrapper[4856]: I1203 09:26:59.353919 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-cg4vl" podStartSLOduration=40.096375876 podStartE2EDuration="54.353891896s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.235096015 +0000 UTC m=+836.417988316" lastFinishedPulling="2025-12-03 09:26:22.492612035 +0000 UTC m=+850.675504336" observedRunningTime="2025-12-03 09:26:59.339381476 +0000 UTC m=+887.522273777" watchObservedRunningTime="2025-12-03 09:26:59.353891896 +0000 UTC m=+887.536784347" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.498980 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" event={"ID":"d82e1df1-d3d5-4f54-874f-291e3d82aac6","Type":"ContainerStarted","Data":"388a4477e8c1e4f877bc60bdbba46181c455e58552d073898450010806f57498"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.500568 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.507630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" event={"ID":"24007279-b1cb-4d5b-aca4-c55d0cd825b7","Type":"ContainerStarted","Data":"bc38112ca4fcd8b9f139368622bdff9e2a0791e3332c7ff118ecdeaac2807d50"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.507865 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.523058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" event={"ID":"169cc116-1edd-4af9-b992-4bdb8e912231","Type":"ContainerStarted","Data":"0b4b96c0e49e78263261b067bdb6a7875ade88791ee87c216d988889123db51e"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.524032 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.546003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" event={"ID":"38e8c4db-27d8-4ffa-98e8-0859bec1243c","Type":"ContainerStarted","Data":"0b52c00514eddd9e6b64acf438143532516a4188931165b7b1a81fa927c1af2b"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.546975 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.561217 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" event={"ID":"61066eb5-99e6-4ec9-9dea-3d2ecd8d456e","Type":"ContainerStarted","Data":"abf713607c44904df5b4f82283545dd55e7e7e5f5d8bef2d6b6765168fda550b"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.561561 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.576682 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" event={"ID":"445299d7-37a7-4fa0-a50c-e81643492293","Type":"ContainerStarted","Data":"fb294a864f9ce703e3fd9a38f2aec8ef8395f489bf4cae5852800099a77ce03f"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.623105 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" podStartSLOduration=7.080617398 podStartE2EDuration="54.623080021s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.377413617 +0000 UTC m=+836.560305918" lastFinishedPulling="2025-12-03 09:26:55.91987624 +0000 UTC m=+884.102768541" observedRunningTime="2025-12-03 09:27:00.622702482 +0000 UTC m=+888.805594793" watchObservedRunningTime="2025-12-03 09:27:00.623080021 +0000 UTC m=+888.805972322" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.629024 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" 
podStartSLOduration=9.097221067 podStartE2EDuration="54.629001792s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.355592411 +0000 UTC m=+836.538484712" lastFinishedPulling="2025-12-03 09:26:53.887373126 +0000 UTC m=+882.070265437" observedRunningTime="2025-12-03 09:27:00.558635897 +0000 UTC m=+888.741528198" watchObservedRunningTime="2025-12-03 09:27:00.629001792 +0000 UTC m=+888.811894093" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.632612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" event={"ID":"2513c74a-1905-4f71-bf3c-c71095d756d3","Type":"ContainerStarted","Data":"53b0fd6ba0904ed823f8ee7453006ceb073e356b7d7af9130d70104b53d85eab"} Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.636102 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.663877 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" podStartSLOduration=6.00591828 podStartE2EDuration="54.663846672s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.149365102 +0000 UTC m=+836.332257403" lastFinishedPulling="2025-12-03 09:26:56.807293494 +0000 UTC m=+884.990185795" observedRunningTime="2025-12-03 09:27:00.662471047 +0000 UTC m=+888.845363358" watchObservedRunningTime="2025-12-03 09:27:00.663846672 +0000 UTC m=+888.846738973" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.691750 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" podStartSLOduration=8.06946728 podStartE2EDuration="55.691725163s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.297633128 +0000 UTC m=+836.480525429" lastFinishedPulling="2025-12-03 09:26:55.919891011 +0000 UTC m=+884.102783312" observedRunningTime="2025-12-03 09:27:00.686307945 +0000 UTC m=+888.869200266" watchObservedRunningTime="2025-12-03 09:27:00.691725163 +0000 UTC m=+888.874617464" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.745728 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" podStartSLOduration=7.130164031 podStartE2EDuration="54.74569822s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.302703579 +0000 UTC m=+836.485595880" lastFinishedPulling="2025-12-03 09:26:55.918237768 +0000 UTC m=+884.101130069" observedRunningTime="2025-12-03 09:27:00.744564101 +0000 UTC m=+888.927456402" watchObservedRunningTime="2025-12-03 09:27:00.74569822 +0000 UTC m=+888.928590521" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.835924 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" podStartSLOduration=7.050652055 podStartE2EDuration="55.835881661s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:07.590146287 +0000 UTC m=+835.773038588" lastFinishedPulling="2025-12-03 09:26:56.375375893 +0000 UTC m=+884.558268194" observedRunningTime="2025-12-03 09:27:00.796366683 +0000 UTC m=+888.979258984" 
watchObservedRunningTime="2025-12-03 09:27:00.835881661 +0000 UTC m=+889.018773962" Dec 03 09:27:00 crc kubenswrapper[4856]: I1203 09:27:00.838974 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-cw2wq" podStartSLOduration=41.090628516 podStartE2EDuration="55.838902078s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:07.761771718 +0000 UTC m=+835.944664019" lastFinishedPulling="2025-12-03 09:26:22.51004528 +0000 UTC m=+850.692937581" observedRunningTime="2025-12-03 09:27:00.828489283 +0000 UTC m=+889.011381594" watchObservedRunningTime="2025-12-03 09:27:00.838902078 +0000 UTC m=+889.021794379" Dec 03 09:27:01 crc kubenswrapper[4856]: I1203 09:27:01.671360 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-nccbk" Dec 03 09:27:01 crc kubenswrapper[4856]: I1203 09:27:01.671721 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-txg26" Dec 03 09:27:02 crc kubenswrapper[4856]: I1203 09:27:02.674829 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-kfkmm" Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.698195 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.698769 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.698784 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" event={"ID":"77383a17-c2e3-4f54-8296-414e707e2056","Type":"ContainerStarted","Data":"f481ded7cd379b7bddbffd78c4bf6c6b538ffdb93c5b88837313ec763dd4c21b"} Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.698980 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" event={"ID":"77383a17-c2e3-4f54-8296-414e707e2056","Type":"ContainerStarted","Data":"fc20d63a92a8547a73da58a9b643f20b979f94de5c871c600dc11751ef75e931"} Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.698995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" event={"ID":"d3707fa3-12a0-490e-baac-1fd0ce34fbd5","Type":"ContainerStarted","Data":"d370915cb2f8b5ac69bf557b3936c7bf279040b99a671dc9057d08eb4b65527f"} Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.699006 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" event={"ID":"d3707fa3-12a0-490e-baac-1fd0ce34fbd5","Type":"ContainerStarted","Data":"9e4a0dba12ea73d25066b31fae7c87120db765f7394651f17a509c80d8fa7b6d"} Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.727199 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59" podStartSLOduration=51.061474691 podStartE2EDuration="58.727178256s" 
podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:56.371477514 +0000 UTC m=+884.554369815" lastFinishedPulling="2025-12-03 09:27:04.037181079 +0000 UTC m=+892.220073380" observedRunningTime="2025-12-03 09:27:04.720207198 +0000 UTC m=+892.903099509" watchObservedRunningTime="2025-12-03 09:27:04.727178256 +0000 UTC m=+892.910070557" Dec 03 09:27:04 crc kubenswrapper[4856]: I1203 09:27:04.751226 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4" podStartSLOduration=52.49645971 podStartE2EDuration="59.751191169s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:56.784843081 +0000 UTC m=+884.967735382" lastFinishedPulling="2025-12-03 09:27:04.03957454 +0000 UTC m=+892.222466841" observedRunningTime="2025-12-03 09:27:04.744264612 +0000 UTC m=+892.927156913" watchObservedRunningTime="2025-12-03 09:27:04.751191169 +0000 UTC m=+892.934083470" Dec 03 09:27:06 crc kubenswrapper[4856]: I1203 09:27:06.259504 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-8qjgb" Dec 03 09:27:06 crc kubenswrapper[4856]: I1203 09:27:06.825338 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-t6jdv" Dec 03 09:27:06 crc kubenswrapper[4856]: I1203 09:27:06.971952 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-2rqwt" Dec 03 09:27:08 crc kubenswrapper[4856]: I1203 09:27:08.903347 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-864649db6c-vtrz8" Dec 03 09:27:10 crc kubenswrapper[4856]: I1203 09:27:10.691375 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:27:11 crc kubenswrapper[4856]: I1203 09:27:11.745822 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" event={"ID":"25e84c2c-bca6-438b-ad4c-f7154e1ba97a","Type":"ContainerStarted","Data":"18f7f58c6dee35b319a2247b77190b296c74978a9ef8247fff168b585b19af8a"} Dec 03 09:27:11 crc kubenswrapper[4856]: I1203 09:27:11.763357 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rm6rm" podStartSLOduration=3.02968969 podStartE2EDuration="1m5.763330627s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.378054093 +0000 UTC m=+836.560946394" lastFinishedPulling="2025-12-03 09:27:11.11169503 +0000 UTC m=+899.294587331" observedRunningTime="2025-12-03 09:27:11.75872919 +0000 UTC m=+899.941621491" watchObservedRunningTime="2025-12-03 09:27:11.763330627 +0000 UTC m=+899.946222938" Dec 03 09:27:12 crc kubenswrapper[4856]: E1203 09:27:12.698312 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf" Dec 03 09:27:13 crc 
Dec 03 09:27:12 crc kubenswrapper[4856]: E1203 09:27:12.698312 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podUID="8e9597d6-d043-4377-9b86-cf94a5df8ddf"
Dec 03 09:27:13 crc kubenswrapper[4856]: I1203 09:27:13.772577 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" event={"ID":"0f952368-5565-442c-8bcb-aa61130cb3c7","Type":"ContainerStarted","Data":"8c0e37d3a15758d5d8585b606f0b73c59e1fd32a8d1c2ace57dcc97404c86c98"}
Dec 03 09:27:13 crc kubenswrapper[4856]: I1203 09:27:13.772875 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s"
Dec 03 09:27:13 crc kubenswrapper[4856]: I1203 09:27:13.791543 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s" podStartSLOduration=4.046239873 podStartE2EDuration="1m8.791524802s" podCreationTimestamp="2025-12-03 09:26:05 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.27960168 +0000 UTC m=+836.462493981" lastFinishedPulling="2025-12-03 09:27:13.024886609 +0000 UTC m=+901.207778910" observedRunningTime="2025-12-03 09:27:13.789497 +0000 UTC m=+901.972389311" watchObservedRunningTime="2025-12-03 09:27:13.791524802 +0000 UTC m=+901.974417103"
Dec 03 09:27:18 crc kubenswrapper[4856]: I1203 09:27:18.155164 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-x8vm4"
Dec 03 09:27:18 crc kubenswrapper[4856]: I1203 09:27:18.421191 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59"
Dec 03 09:27:26 crc kubenswrapper[4856]: I1203 09:27:26.498613 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bmn9s"
Dec 03 09:27:28 crc kubenswrapper[4856]: I1203 09:27:28.949580 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" event={"ID":"8e9597d6-d043-4377-9b86-cf94a5df8ddf","Type":"ContainerStarted","Data":"826a9ee9f58f0e762e7abdf9427d1aef383ffb5b9f87c9a652d3f614a50be45e"}
Dec 03 09:27:28 crc kubenswrapper[4856]: I1203 09:27:28.950335 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g"
Dec 03 09:27:28 crc kubenswrapper[4856]: I1203 09:27:28.968486 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g" podStartSLOduration=3.05693668 podStartE2EDuration="1m22.968465382s" podCreationTimestamp="2025-12-03 09:26:06 +0000 UTC" firstStartedPulling="2025-12-03 09:26:08.316875527 +0000 UTC m=+836.499767848" lastFinishedPulling="2025-12-03 09:27:28.228404249 +0000 UTC m=+916.411296550" observedRunningTime="2025-12-03 09:27:28.967181749 +0000 UTC m=+917.150074050" watchObservedRunningTime="2025-12-03 09:27:28.968465382 +0000 UTC m=+917.151357693"
Dec 03 09:27:37 crc kubenswrapper[4856]: I1203 09:27:37.039221 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h5j7g"
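The E-level entry at 09:27:12 shows the test-operator manager container stuck in ImagePullBackOff; the 09:27:28 entries then show the pull finally completing (lastFinishedPulling 09:27:28.228) and the container starting. Between attempts the kubelet waits with a doubling backoff; the sketch below only illustrates that schedule under commonly cited defaults (10s initial delay, 5-minute cap), which are assumptions here rather than values read from this node's configuration:

    # Hypothetical illustration of kubelet's image-pull retry delays.
    # The 10s initial delay and 300s cap are assumed defaults, not read
    # from this cluster; only the doubling shape is the point.
    import itertools

    def backoff_delays(initial: float = 10.0, cap: float = 300.0):
        delay = initial
        while True:
            yield delay
            delay = min(delay * 2.0, cap)

    print(list(itertools.islice(backoff_delays(), 6)))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]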
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.249712 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.249910 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ns5lg" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.250066 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.250361 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.261165 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pz5k"] Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.379063 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x2jgx"] Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.382277 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.387014 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.394962 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x2jgx"] Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.433561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a255b-4b48-40f8-8f11-8ad3294ef3eb-config\") pod \"dnsmasq-dns-675f4bcbfc-9pz5k\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.434408 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9b5\" (UniqueName: \"kubernetes.io/projected/592a255b-4b48-40f8-8f11-8ad3294ef3eb-kube-api-access-2z9b5\") pod \"dnsmasq-dns-675f4bcbfc-9pz5k\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.537064 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-kube-api-access-kvtnr\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.537726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.537914 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a255b-4b48-40f8-8f11-8ad3294ef3eb-config\") pod \"dnsmasq-dns-675f4bcbfc-9pz5k\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" 
Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.538042 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-config\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.538189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9b5\" (UniqueName: \"kubernetes.io/projected/592a255b-4b48-40f8-8f11-8ad3294ef3eb-kube-api-access-2z9b5\") pod \"dnsmasq-dns-675f4bcbfc-9pz5k\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.539267 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a255b-4b48-40f8-8f11-8ad3294ef3eb-config\") pod \"dnsmasq-dns-675f4bcbfc-9pz5k\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.564397 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9b5\" (UniqueName: \"kubernetes.io/projected/592a255b-4b48-40f8-8f11-8ad3294ef3eb-kube-api-access-2z9b5\") pod \"dnsmasq-dns-675f4bcbfc-9pz5k\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.577110 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.639871 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-kube-api-access-kvtnr\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.640689 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.640799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-config\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.642012 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-config\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.642968 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: 
\"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.658004 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-kube-api-access-kvtnr\") pod \"dnsmasq-dns-78dd6ddcc-x2jgx\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:54 crc kubenswrapper[4856]: I1203 09:27:54.707249 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:27:55 crc kubenswrapper[4856]: I1203 09:27:55.490935 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pz5k"] Dec 03 09:27:55 crc kubenswrapper[4856]: I1203 09:27:55.598336 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x2jgx"] Dec 03 09:27:56 crc kubenswrapper[4856]: I1203 09:27:56.435114 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" event={"ID":"592a255b-4b48-40f8-8f11-8ad3294ef3eb","Type":"ContainerStarted","Data":"5e2767de49de77a46de23d75adf701417ad50515dc2c1724ff164172edecbe63"} Dec 03 09:27:56 crc kubenswrapper[4856]: I1203 09:27:56.445133 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" event={"ID":"a529ba9d-1bb1-4c55-bef4-c93faaa4a454","Type":"ContainerStarted","Data":"bef2985119367360f5bb7c90681b9c1689a9f40abca0f5d60df0ca8ec5710cdb"} Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.366150 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pz5k"] Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.388554 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqxgb"] Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.390481 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.414242 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqxgb"] Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.523462 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lb4g\" (UniqueName: \"kubernetes.io/projected/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-kube-api-access-4lb4g\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.523538 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-config\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.523581 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.625838 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lb4g\" (UniqueName: \"kubernetes.io/projected/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-kube-api-access-4lb4g\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.625913 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-config\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.625981 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.627080 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.628216 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-config\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.668931 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lb4g\" (UniqueName: 
\"kubernetes.io/projected/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-kube-api-access-4lb4g\") pod \"dnsmasq-dns-666b6646f7-zqxgb\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.842176 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.918135 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x2jgx"] Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.932255 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rqk5m"] Dec 03 09:27:57 crc kubenswrapper[4856]: I1203 09:27:57.997082 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.032773 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfgd2\" (UniqueName: \"kubernetes.io/projected/d5d35d2a-2149-47d1-a385-d4ff0f058904-kube-api-access-sfgd2\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.032876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.032938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-config\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.033761 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rqk5m"] Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.188085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-config\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.188313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfgd2\" (UniqueName: \"kubernetes.io/projected/d5d35d2a-2149-47d1-a385-d4ff0f058904-kube-api-access-sfgd2\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.188380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.189892 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-config\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.195521 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.294231 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfgd2\" (UniqueName: \"kubernetes.io/projected/d5d35d2a-2149-47d1-a385-d4ff0f058904-kube-api-access-sfgd2\") pod \"dnsmasq-dns-57d769cc4f-rqk5m\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") " pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.532143 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.675598 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.677662 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.687511 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.688561 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.689108 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.689234 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.689435 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-f58dm" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.690157 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.690912 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.881667 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933397 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933499 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933534 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16e71e20-1329-46fc-b544-39febc69ae60-pod-info\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933590 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16e71e20-1329-46fc-b544-39febc69ae60-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933840 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckt2\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-kube-api-access-8ckt2\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933866 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-config-data\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.933891 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-server-conf\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.934060 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:58 crc kubenswrapper[4856]: I1203 09:27:58.934082 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 
09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.035057 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqxgb"] Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ckt2\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-kube-api-access-8ckt2\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037786 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-config-data\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-server-conf\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037917 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.037975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.038009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.038029 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.038051 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16e71e20-1329-46fc-b544-39febc69ae60-pod-info\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.038092 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16e71e20-1329-46fc-b544-39febc69ae60-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.039378 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.040243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.040672 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-config-data\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.041320 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.042075 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-server-conf\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.044142 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.061116 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.062045 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16e71e20-1329-46fc-b544-39febc69ae60-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 
09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.070794 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.083336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ckt2\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-kube-api-access-8ckt2\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.084894 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16e71e20-1329-46fc-b544-39febc69ae60-pod-info\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.105414 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.203711 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.341208 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.344850 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.350208 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.350526 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.350796 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.351009 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.351158 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.351355 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lvkpt" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.351460 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.352424 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449116 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449186 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449226 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689sv\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-kube-api-access-689sv\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449402 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449439 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449480 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449560 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449601 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.449631 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551486 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551513 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-689sv\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-kube-api-access-689sv\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551571 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551601 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551631 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551662 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551698 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.551783 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.552882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.554189 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.554452 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.555301 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.555753 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.557221 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.562422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.563602 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.569360 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.574517 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.580138 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.581531 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-rqk5m"] Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.582492 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689sv\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-kube-api-access-689sv\") pod \"rabbitmq-cell1-server-0\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.621755 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.632881 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" event={"ID":"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927","Type":"ContainerStarted","Data":"de91caadc2b210e64543cdc57899a5efbdbc2c056840914dbabb4cec3834ac27"} Dec 03 09:27:59 crc kubenswrapper[4856]: W1203 09:27:59.633222 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5d35d2a_2149_47d1_a385_d4ff0f058904.slice/crio-bf4873500be407d3d107a8660b5895feb3ce223a077e18ef45fd0cb0313374e0 WatchSource:0}: Error finding container bf4873500be407d3d107a8660b5895feb3ce223a077e18ef45fd0cb0313374e0: Status 404 returned error can't find the container with id bf4873500be407d3d107a8660b5895feb3ce223a077e18ef45fd0cb0313374e0 Dec 03 09:27:59 crc kubenswrapper[4856]: W1203 09:27:59.650531 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e71e20_1329_46fc_b544_39febc69ae60.slice/crio-febdf524de1bec8493dbcb0162ff75943cef72486f8546ae4f8b63e500b5d9c5 WatchSource:0}: Error finding container febdf524de1bec8493dbcb0162ff75943cef72486f8546ae4f8b63e500b5d9c5: Status 404 returned error can't find the container with id febdf524de1bec8493dbcb0162ff75943cef72486f8546ae4f8b63e500b5d9c5 Dec 03 09:27:59 crc kubenswrapper[4856]: I1203 09:27:59.669539 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.095862 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.098321 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.102876 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.107349 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.107657 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.108876 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vbbfh" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.126272 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.128066 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.174196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.174569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.174749 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-config-data-default\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.175157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ln7n\" (UniqueName: \"kubernetes.io/projected/643c316d-09f1-4aee-8d49-34989baaa50e-kube-api-access-8ln7n\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.175202 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/643c316d-09f1-4aee-8d49-34989baaa50e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.175505 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/643c316d-09f1-4aee-8d49-34989baaa50e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.176187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-kolla-config\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.176294 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643c316d-09f1-4aee-8d49-34989baaa50e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.296740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ln7n\" (UniqueName: \"kubernetes.io/projected/643c316d-09f1-4aee-8d49-34989baaa50e-kube-api-access-8ln7n\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.296939 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/643c316d-09f1-4aee-8d49-34989baaa50e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.297001 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/643c316d-09f1-4aee-8d49-34989baaa50e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.297079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-kolla-config\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.297100 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643c316d-09f1-4aee-8d49-34989baaa50e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.297129 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.297194 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.297236 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.299289 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.299801 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-kolla-config\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.300121 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-config-data-default\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.299760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/643c316d-09f1-4aee-8d49-34989baaa50e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.300597 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/643c316d-09f1-4aee-8d49-34989baaa50e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.359685 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/643c316d-09f1-4aee-8d49-34989baaa50e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.458306 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643c316d-09f1-4aee-8d49-34989baaa50e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.488456 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ln7n\" (UniqueName: \"kubernetes.io/projected/643c316d-09f1-4aee-8d49-34989baaa50e-kube-api-access-8ln7n\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.533684 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.539276 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"643c316d-09f1-4aee-8d49-34989baaa50e\") " pod="openstack/openstack-galera-0" Dec 03 09:28:00 crc 
kubenswrapper[4856]: I1203 09:28:00.718571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16e71e20-1329-46fc-b544-39febc69ae60","Type":"ContainerStarted","Data":"febdf524de1bec8493dbcb0162ff75943cef72486f8546ae4f8b63e500b5d9c5"} Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.718630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb","Type":"ContainerStarted","Data":"d547d93f22e80fe193efe3630484425cea8cd39d2e382b91231e59472f4a6304"} Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.718643 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" event={"ID":"d5d35d2a-2149-47d1-a385-d4ff0f058904","Type":"ContainerStarted","Data":"bf4873500be407d3d107a8660b5895feb3ce223a077e18ef45fd0cb0313374e0"} Dec 03 09:28:00 crc kubenswrapper[4856]: I1203 09:28:00.754846 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.628019 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.632650 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.646689 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vvczx" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.646743 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.647219 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.647267 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.654413 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.664384 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.809981 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.811712 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.820204 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821458 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821535 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821642 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821712 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821764 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.821847 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jff7h\" (UniqueName: \"kubernetes.io/projected/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-kube-api-access-jff7h\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.822232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.824828 4856 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"memcached-memcached-dockercfg-rwlfs" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.825164 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.858019 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.927605 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.927703 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5025473d-5c66-4550-90f1-5e4988fcbd9e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.927737 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5025473d-5c66-4550-90f1-5e4988fcbd9e-kolla-config\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.927788 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jff7h\" (UniqueName: \"kubernetes.io/projected/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-kube-api-access-jff7h\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.927880 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.928425 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5025473d-5c66-4550-90f1-5e4988fcbd9e-config-data\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.928657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5025473d-5c66-4550-90f1-5e4988fcbd9e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.928715 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cph\" (UniqueName: \"kubernetes.io/projected/5025473d-5c66-4550-90f1-5e4988fcbd9e-kube-api-access-z8cph\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.928898 4856 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.928963 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.929025 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.929100 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.929155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.930401 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.931539 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.931607 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.932866 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.944948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.945857 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.956545 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:01 crc kubenswrapper[4856]: I1203 09:28:01.969478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jff7h\" (UniqueName: \"kubernetes.io/projected/e8b16ba2-5c26-4b5e-85ab-d99d915b68d0-kube-api-access-jff7h\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.010639 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0\") " pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.033272 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5025473d-5c66-4550-90f1-5e4988fcbd9e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.033340 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5025473d-5c66-4550-90f1-5e4988fcbd9e-kolla-config\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.033376 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5025473d-5c66-4550-90f1-5e4988fcbd9e-config-data\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.033398 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5025473d-5c66-4550-90f1-5e4988fcbd9e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.033421 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cph\" (UniqueName: \"kubernetes.io/projected/5025473d-5c66-4550-90f1-5e4988fcbd9e-kube-api-access-z8cph\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.039769 
4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5025473d-5c66-4550-90f1-5e4988fcbd9e-kolla-config\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.040048 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5025473d-5c66-4550-90f1-5e4988fcbd9e-config-data\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.052979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5025473d-5c66-4550-90f1-5e4988fcbd9e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.065085 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5025473d-5c66-4550-90f1-5e4988fcbd9e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.176544 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cph\" (UniqueName: \"kubernetes.io/projected/5025473d-5c66-4550-90f1-5e4988fcbd9e-kube-api-access-z8cph\") pod \"memcached-0\" (UID: \"5025473d-5c66-4550-90f1-5e4988fcbd9e\") " pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.269244 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.563196 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 03 09:28:02 crc kubenswrapper[4856]: I1203 09:28:02.796576 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"643c316d-09f1-4aee-8d49-34989baaa50e","Type":"ContainerStarted","Data":"583e067c746f3d6640d6e96a1e01f5dd59202703e85649e51d0adf5504950785"} Dec 03 09:28:03 crc kubenswrapper[4856]: I1203 09:28:03.212209 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 09:28:04 crc kubenswrapper[4856]: I1203 09:28:04.881738 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:28:04 crc kubenswrapper[4856]: I1203 09:28:04.895948 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 09:28:04 crc kubenswrapper[4856]: I1203 09:28:04.897496 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:28:04 crc kubenswrapper[4856]: I1203 09:28:04.902109 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wdlrn" Dec 03 09:28:04 crc kubenswrapper[4856]: I1203 09:28:04.949913 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48mp\" (UniqueName: \"kubernetes.io/projected/6707abf6-3ddf-4cf5-91d7-10a6a229d274-kube-api-access-k48mp\") pod \"kube-state-metrics-0\" (UID: \"6707abf6-3ddf-4cf5-91d7-10a6a229d274\") " pod="openstack/kube-state-metrics-0" Dec 03 09:28:04 crc kubenswrapper[4856]: I1203 09:28:04.959083 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0","Type":"ContainerStarted","Data":"1bc27aaa2b34b373d13923ba16192ae9a5fd0624f7ad12bd847e410881cf3abb"} Dec 03 09:28:05 crc kubenswrapper[4856]: I1203 09:28:05.113065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48mp\" (UniqueName: \"kubernetes.io/projected/6707abf6-3ddf-4cf5-91d7-10a6a229d274-kube-api-access-k48mp\") pod \"kube-state-metrics-0\" (UID: \"6707abf6-3ddf-4cf5-91d7-10a6a229d274\") " pod="openstack/kube-state-metrics-0" Dec 03 09:28:05 crc kubenswrapper[4856]: I1203 09:28:05.375498 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48mp\" (UniqueName: \"kubernetes.io/projected/6707abf6-3ddf-4cf5-91d7-10a6a229d274-kube-api-access-k48mp\") pod \"kube-state-metrics-0\" (UID: \"6707abf6-3ddf-4cf5-91d7-10a6a229d274\") " pod="openstack/kube-state-metrics-0" Dec 03 09:28:05 crc kubenswrapper[4856]: I1203 09:28:05.546745 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 09:28:05 crc kubenswrapper[4856]: I1203 09:28:05.754833 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 09:28:05 crc kubenswrapper[4856]: W1203 09:28:05.870045 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5025473d_5c66_4550_90f1_5e4988fcbd9e.slice/crio-6a5095e19193b873d15fa4fdd65e2530145b8b87b1a22433917960fa2ea3d103 WatchSource:0}: Error finding container 6a5095e19193b873d15fa4fdd65e2530145b8b87b1a22433917960fa2ea3d103: Status 404 returned error can't find the container with id 6a5095e19193b873d15fa4fdd65e2530145b8b87b1a22433917960fa2ea3d103 Dec 03 09:28:06 crc kubenswrapper[4856]: I1203 09:28:06.036625 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5025473d-5c66-4550-90f1-5e4988fcbd9e","Type":"ContainerStarted","Data":"6a5095e19193b873d15fa4fdd65e2530145b8b87b1a22433917960fa2ea3d103"} Dec 03 09:28:06 crc kubenswrapper[4856]: I1203 09:28:06.240338 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:28:06 crc kubenswrapper[4856]: W1203 09:28:06.261647 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6707abf6_3ddf_4cf5_91d7_10a6a229d274.slice/crio-7750c1c5572e8f1b158cabca002605595a27230df28293432603cb0a6a8986cb WatchSource:0}: Error finding container 7750c1c5572e8f1b158cabca002605595a27230df28293432603cb0a6a8986cb: Status 404 returned error can't find the container with id 7750c1c5572e8f1b158cabca002605595a27230df28293432603cb0a6a8986cb Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.029392 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.031266 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.039298 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.039552 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.040316 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.040627 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8vv7c" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.040881 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.040977 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.168009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6707abf6-3ddf-4cf5-91d7-10a6a229d274","Type":"ContainerStarted","Data":"7750c1c5572e8f1b158cabca002605595a27230df28293432603cb0a6a8986cb"} Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215101 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215165 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215277 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215325 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215348 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5gp\" (UniqueName: 
\"kubernetes.io/projected/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-kube-api-access-bc5gp\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.215422 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.316734 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.316826 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.316864 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.316927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.316958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.316980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5gp\" (UniqueName: \"kubernetes.io/projected/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-kube-api-access-bc5gp\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.317018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.317066 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.318467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.318649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.319421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.319159 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.328612 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.332167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.333636 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.343847 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5gp\" (UniqueName: \"kubernetes.io/projected/5596d8aa-639a-4e4f-8905-ceb3cbb622cd-kube-api-access-bc5gp\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.386508 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7tm2h"] Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.388160 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.395150 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.395398 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xclmj" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.396467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5596d8aa-639a-4e4f-8905-ceb3cbb622cd\") " pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.400768 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.418300 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7tm2h"] Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.435982 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.476613 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-g4lq4"] Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.479878 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.492344 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-g4lq4"] Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.525907 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-run-ovn\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.525974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-log-ovn\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.526067 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1b289d-32ea-4bbb-a203-d208e0267f9b-ovn-controller-tls-certs\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.526107 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmr4\" (UniqueName: \"kubernetes.io/projected/da1b289d-32ea-4bbb-a203-d208e0267f9b-kube-api-access-mzmr4\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.526141 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da1b289d-32ea-4bbb-a203-d208e0267f9b-combined-ca-bundle\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.526171 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da1b289d-32ea-4bbb-a203-d208e0267f9b-scripts\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.526261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-run\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769600 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1b289d-32ea-4bbb-a203-d208e0267f9b-ovn-controller-tls-certs\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769675 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-etc-ovs\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769723 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-lib\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmr4\" (UniqueName: \"kubernetes.io/projected/da1b289d-32ea-4bbb-a203-d208e0267f9b-kube-api-access-mzmr4\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-run\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769797 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1b289d-32ea-4bbb-a203-d208e0267f9b-combined-ca-bundle\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769840 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da1b289d-32ea-4bbb-a203-d208e0267f9b-scripts\") pod \"ovn-controller-7tm2h\" (UID: 
\"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769867 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khbp\" (UniqueName: \"kubernetes.io/projected/5816b942-fa92-48fe-a44a-279e02ae7c91-kube-api-access-6khbp\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769896 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-log\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769934 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-run\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769961 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-run-ovn\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.769985 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-log-ovn\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.770006 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5816b942-fa92-48fe-a44a-279e02ae7c91-scripts\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.771011 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-log-ovn\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.771170 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-run\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.771281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da1b289d-32ea-4bbb-a203-d208e0267f9b-var-run-ovn\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.773895 4856 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da1b289d-32ea-4bbb-a203-d208e0267f9b-scripts\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.806903 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1b289d-32ea-4bbb-a203-d208e0267f9b-ovn-controller-tls-certs\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.809090 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1b289d-32ea-4bbb-a203-d208e0267f9b-combined-ca-bundle\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.822118 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmr4\" (UniqueName: \"kubernetes.io/projected/da1b289d-32ea-4bbb-a203-d208e0267f9b-kube-api-access-mzmr4\") pod \"ovn-controller-7tm2h\" (UID: \"da1b289d-32ea-4bbb-a203-d208e0267f9b\") " pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872246 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-etc-ovs\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-lib\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872359 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-run\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872397 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khbp\" (UniqueName: \"kubernetes.io/projected/5816b942-fa92-48fe-a44a-279e02ae7c91-kube-api-access-6khbp\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872457 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-log\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872543 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5816b942-fa92-48fe-a44a-279e02ae7c91-scripts\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " 
pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-run\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.872956 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-lib\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.873323 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-var-log\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.873709 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5816b942-fa92-48fe-a44a-279e02ae7c91-etc-ovs\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.877241 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5816b942-fa92-48fe-a44a-279e02ae7c91-scripts\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:07 crc kubenswrapper[4856]: I1203 09:28:07.895993 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khbp\" (UniqueName: \"kubernetes.io/projected/5816b942-fa92-48fe-a44a-279e02ae7c91-kube-api-access-6khbp\") pod \"ovn-controller-ovs-g4lq4\" (UID: \"5816b942-fa92-48fe-a44a-279e02ae7c91\") " pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:08 crc kubenswrapper[4856]: I1203 09:28:08.079580 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:08 crc kubenswrapper[4856]: I1203 09:28:08.101826 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.085487 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.088214 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.093052 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6g47q" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.093442 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.115209 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.115559 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.126985 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.281339 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcb6c56-c540-463d-a481-0de5eb693e2b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282045 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fpk\" (UniqueName: \"kubernetes.io/projected/9dcb6c56-c540-463d-a481-0de5eb693e2b-kube-api-access-j8fpk\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282159 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dcb6c56-c540-463d-a481-0de5eb693e2b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282309 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282455 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.282546 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dcb6c56-c540-463d-a481-0de5eb693e2b-config\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcb6c56-c540-463d-a481-0de5eb693e2b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385101 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385121 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8fpk\" (UniqueName: \"kubernetes.io/projected/9dcb6c56-c540-463d-a481-0de5eb693e2b-kube-api-access-j8fpk\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385139 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385193 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dcb6c56-c540-463d-a481-0de5eb693e2b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385213 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.385386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dcb6c56-c540-463d-a481-0de5eb693e2b-config\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.386356 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9dcb6c56-c540-463d-a481-0de5eb693e2b-config\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.387222 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcb6c56-c540-463d-a481-0de5eb693e2b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.388537 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dcb6c56-c540-463d-a481-0de5eb693e2b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.389159 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.393944 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.397936 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.398770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcb6c56-c540-463d-a481-0de5eb693e2b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.417467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.423549 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8fpk\" (UniqueName: \"kubernetes.io/projected/9dcb6c56-c540-463d-a481-0de5eb693e2b-kube-api-access-j8fpk\") pod \"ovsdbserver-sb-0\" (UID: \"9dcb6c56-c540-463d-a481-0de5eb693e2b\") " pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:11 crc kubenswrapper[4856]: I1203 09:28:11.452197 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:18 crc kubenswrapper[4856]: I1203 09:28:18.811650 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kgqs6"] Dec 03 09:28:18 crc kubenswrapper[4856]: I1203 09:28:18.815932 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:18 crc kubenswrapper[4856]: I1203 09:28:18.867574 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kgqs6"] Dec 03 09:28:18 crc kubenswrapper[4856]: I1203 09:28:18.925727 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-utilities\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:18 crc kubenswrapper[4856]: I1203 09:28:18.925876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-catalog-content\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:18 crc kubenswrapper[4856]: I1203 09:28:18.925969 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmjt\" (UniqueName: \"kubernetes.io/projected/94f7ba13-4be7-4bfc-a5b7-acd860994e94-kube-api-access-fpmjt\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.063618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmjt\" (UniqueName: \"kubernetes.io/projected/94f7ba13-4be7-4bfc-a5b7-acd860994e94-kube-api-access-fpmjt\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.063785 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-utilities\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.063875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-catalog-content\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.064565 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-utilities\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.064639 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-catalog-content\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.108283 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fpmjt\" (UniqueName: \"kubernetes.io/projected/94f7ba13-4be7-4bfc-a5b7-acd860994e94-kube-api-access-fpmjt\") pod \"community-operators-kgqs6\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:19 crc kubenswrapper[4856]: I1203 09:28:19.364171 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:25 crc kubenswrapper[4856]: E1203 09:28:25.984395 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 03 09:28:25 crc kubenswrapper[4856]: E1203 09:28:25.984929 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ln7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(643c316d-09f1-4aee-8d49-34989baaa50e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:25 crc kubenswrapper[4856]: E1203 09:28:25.986547 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="643c316d-09f1-4aee-8d49-34989baaa50e" Dec 03 09:28:26 crc kubenswrapper[4856]: E1203 09:28:26.544705 4856 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="643c316d-09f1-4aee-8d49-34989baaa50e" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.491157 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.491783 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ckt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(16e71e20-1329-46fc-b544-39febc69ae60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:27 crc kubenswrapper[4856]: 
E1203 09:28:27.493217 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="16e71e20-1329-46fc-b544-39febc69ae60" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.534353 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.534644 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jff7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(e8b16ba2-5c26-4b5e-85ab-d99d915b68d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.536736 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="e8b16ba2-5c26-4b5e-85ab-d99d915b68d0" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.544318 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.544602 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-689sv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(86bc6e23-9abf-4b9e-97bd-2f8e29a294bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.545891 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.556697 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="16e71e20-1329-46fc-b544-39febc69ae60" Dec 03 09:28:27 crc kubenswrapper[4856]: E1203 09:28:27.557450 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="e8b16ba2-5c26-4b5e-85ab-d99d915b68d0" Dec 03 09:28:28 crc kubenswrapper[4856]: E1203 09:28:28.570473 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" Dec 03 09:28:35 crc kubenswrapper[4856]: I1203 09:28:35.401340 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 09:28:35 crc kubenswrapper[4856]: E1203 09:28:35.983342 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 09:28:35 crc kubenswrapper[4856]: E1203 09:28:35.983622 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfgd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rqk5m_openstack(d5d35d2a-2149-47d1-a385-d4ff0f058904): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:35 crc kubenswrapper[4856]: E1203 09:28:35.985121 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" podUID="d5d35d2a-2149-47d1-a385-d4ff0f058904" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.093891 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.094658 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lb4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zqxgb_openstack(d1688a3e-c65f-4e44-aa48-b8dc9d8ad927): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.096036 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" podUID="d1688a3e-c65f-4e44-aa48-b8dc9d8ad927" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.101257 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.101469 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvtnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-x2jgx_openstack(a529ba9d-1bb1-4c55-bef4-c93faaa4a454): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.102668 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" podUID="a529ba9d-1bb1-4c55-bef4-c93faaa4a454" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.125110 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.125356 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9pz5k_openstack(592a255b-4b48-40f8-8f11-8ad3294ef3eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.127192 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" podUID="592a255b-4b48-40f8-8f11-8ad3294ef3eb" Dec 03 09:28:36 crc kubenswrapper[4856]: I1203 09:28:36.418385 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 09:28:36 crc kubenswrapper[4856]: I1203 09:28:36.617356 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7tm2h"] Dec 03 09:28:36 crc kubenswrapper[4856]: I1203 09:28:36.662303 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5596d8aa-639a-4e4f-8905-ceb3cbb622cd","Type":"ContainerStarted","Data":"8109db34f96ee88cd60d9a10e076d2478a888093dbd101d7e1a4ccb511273031"} Dec 03 09:28:36 crc kubenswrapper[4856]: I1203 09:28:36.664227 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9dcb6c56-c540-463d-a481-0de5eb693e2b","Type":"ContainerStarted","Data":"c33550f6fba5364774a65d4fefbbe6acc11f808b8b215700e26058c4d3d359a0"} Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.669445 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" podUID="d1688a3e-c65f-4e44-aa48-b8dc9d8ad927" Dec 03 09:28:36 crc kubenswrapper[4856]: E1203 09:28:36.674047 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" podUID="d5d35d2a-2149-47d1-a385-d4ff0f058904" Dec 03 09:28:36 crc kubenswrapper[4856]: I1203 09:28:36.827777 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kgqs6"] Dec 03 09:28:36 crc kubenswrapper[4856]: I1203 09:28:36.852111 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-g4lq4"] Dec 03 09:28:37 crc kubenswrapper[4856]: E1203 09:28:37.461714 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 03 09:28:37 crc kubenswrapper[4856]: E1203 09:28:37.462133 4856 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 03 09:28:37 crc kubenswrapper[4856]: E1203 09:28:37.462363 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k48mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6707abf6-3ddf-4cf5-91d7-10a6a229d274): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:28:37 crc kubenswrapper[4856]: E1203 09:28:37.463878 4856 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.683552 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.686243 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" event={"ID":"a529ba9d-1bb1-4c55-bef4-c93faaa4a454","Type":"ContainerDied","Data":"bef2985119367360f5bb7c90681b9c1689a9f40abca0f5d60df0ca8ec5710cdb"} Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.686307 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef2985119367360f5bb7c90681b9c1689a9f40abca0f5d60df0ca8ec5710cdb" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.689855 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h" event={"ID":"da1b289d-32ea-4bbb-a203-d208e0267f9b","Type":"ContainerStarted","Data":"1310a38beb7ce90cb418a86089b47190588ebe0536fb1289951b1ed5953715a1"} Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.692095 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g4lq4" event={"ID":"5816b942-fa92-48fe-a44a-279e02ae7c91","Type":"ContainerStarted","Data":"17a5296ddf4f17922f469392d374ff0aa8abd06cf9d98e93545b264639eedccc"} Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.694213 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.694257 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9pz5k" event={"ID":"592a255b-4b48-40f8-8f11-8ad3294ef3eb","Type":"ContainerDied","Data":"5e2767de49de77a46de23d75adf701417ad50515dc2c1724ff164172edecbe63"} Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.706925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgqs6" event={"ID":"94f7ba13-4be7-4bfc-a5b7-acd860994e94","Type":"ContainerStarted","Data":"d599f2ba4c46bff038ae016e91d92cab58cf8a1535f26df9767ac0a10dff5574"} Dec 03 09:28:37 crc kubenswrapper[4856]: E1203 09:28:37.708579 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.717792 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.748098 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a255b-4b48-40f8-8f11-8ad3294ef3eb-config\") pod \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.748322 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-dns-svc\") pod \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.748368 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9b5\" (UniqueName: \"kubernetes.io/projected/592a255b-4b48-40f8-8f11-8ad3294ef3eb-kube-api-access-2z9b5\") pod \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\" (UID: \"592a255b-4b48-40f8-8f11-8ad3294ef3eb\") " Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.748420 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-config\") pod \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.748441 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-kube-api-access-kvtnr\") pod \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\" (UID: \"a529ba9d-1bb1-4c55-bef4-c93faaa4a454\") " Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.751291 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592a255b-4b48-40f8-8f11-8ad3294ef3eb-config" (OuterVolumeSpecName: "config") pod "592a255b-4b48-40f8-8f11-8ad3294ef3eb" (UID: "592a255b-4b48-40f8-8f11-8ad3294ef3eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.752981 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a529ba9d-1bb1-4c55-bef4-c93faaa4a454" (UID: "a529ba9d-1bb1-4c55-bef4-c93faaa4a454"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.754117 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-config" (OuterVolumeSpecName: "config") pod "a529ba9d-1bb1-4c55-bef4-c93faaa4a454" (UID: "a529ba9d-1bb1-4c55-bef4-c93faaa4a454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.762328 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-kube-api-access-kvtnr" (OuterVolumeSpecName: "kube-api-access-kvtnr") pod "a529ba9d-1bb1-4c55-bef4-c93faaa4a454" (UID: "a529ba9d-1bb1-4c55-bef4-c93faaa4a454"). InnerVolumeSpecName "kube-api-access-kvtnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.764614 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592a255b-4b48-40f8-8f11-8ad3294ef3eb-kube-api-access-2z9b5" (OuterVolumeSpecName: "kube-api-access-2z9b5") pod "592a255b-4b48-40f8-8f11-8ad3294ef3eb" (UID: "592a255b-4b48-40f8-8f11-8ad3294ef3eb"). InnerVolumeSpecName "kube-api-access-2z9b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.851227 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.853420 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9b5\" (UniqueName: \"kubernetes.io/projected/592a255b-4b48-40f8-8f11-8ad3294ef3eb-kube-api-access-2z9b5\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.853455 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.853477 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/a529ba9d-1bb1-4c55-bef4-c93faaa4a454-kube-api-access-kvtnr\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:37 crc kubenswrapper[4856]: I1203 09:28:37.853501 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a255b-4b48-40f8-8f11-8ad3294ef3eb-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.083533 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pz5k"] Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.093871 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9pz5k"] Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.704177 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592a255b-4b48-40f8-8f11-8ad3294ef3eb" path="/var/lib/kubelet/pods/592a255b-4b48-40f8-8f11-8ad3294ef3eb/volumes" Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.726533 4856 generic.go:334] "Generic (PLEG): container finished" podID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerID="0f01583809383180b43b60f603612ef9cdf61660f668578490541b7c2346d6eb" exitCode=0 Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.726682 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgqs6" event={"ID":"94f7ba13-4be7-4bfc-a5b7-acd860994e94","Type":"ContainerDied","Data":"0f01583809383180b43b60f603612ef9cdf61660f668578490541b7c2346d6eb"} Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.729072 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5025473d-5c66-4550-90f1-5e4988fcbd9e","Type":"ContainerStarted","Data":"97ad40c979d81957a8d414b4b294ec2e3eacc8b93544ec24ebf6c1e45b294891"} Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.729180 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x2jgx" Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.729241 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.786063 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=8.893687387 podStartE2EDuration="37.786019924s" podCreationTimestamp="2025-12-03 09:28:01 +0000 UTC" firstStartedPulling="2025-12-03 09:28:05.874379213 +0000 UTC m=+954.057271514" lastFinishedPulling="2025-12-03 09:28:34.76671172 +0000 UTC m=+982.949604051" observedRunningTime="2025-12-03 09:28:38.776112515 +0000 UTC m=+986.959004826" watchObservedRunningTime="2025-12-03 09:28:38.786019924 +0000 UTC m=+986.968912225" Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.840841 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x2jgx"] Dec 03 09:28:38 crc kubenswrapper[4856]: I1203 09:28:38.858140 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x2jgx"] Dec 03 09:28:40 crc kubenswrapper[4856]: I1203 09:28:40.706729 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a529ba9d-1bb1-4c55-bef4-c93faaa4a454" path="/var/lib/kubelet/pods/a529ba9d-1bb1-4c55-bef4-c93faaa4a454/volumes" Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.763454 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g4lq4" event={"ID":"5816b942-fa92-48fe-a44a-279e02ae7c91","Type":"ContainerStarted","Data":"f83952e239f39e2bbde5f80452af3a068bcf609fd1745ea43fad3b0f1167fab8"} Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.765683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h" event={"ID":"da1b289d-32ea-4bbb-a203-d208e0267f9b","Type":"ContainerStarted","Data":"597718cec2b9b7d53c4e809e653dae615966f268c93ef974e5c3afb64f5cb682"} Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.765941 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7tm2h" Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.768068 4856 generic.go:334] "Generic (PLEG): container finished" podID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerID="41e29efcf7a3d937c48b1858de01bfbc09afc1e848befa5ee21a318f69b60c8c" exitCode=0 Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.768146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgqs6" event={"ID":"94f7ba13-4be7-4bfc-a5b7-acd860994e94","Type":"ContainerDied","Data":"41e29efcf7a3d937c48b1858de01bfbc09afc1e848befa5ee21a318f69b60c8c"} Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.773688 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"643c316d-09f1-4aee-8d49-34989baaa50e","Type":"ContainerStarted","Data":"4c55e87e264dd4256aa70e523025001da9af618eeb278a96cf1bcf39508a730f"} Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.776011 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5596d8aa-639a-4e4f-8905-ceb3cbb622cd","Type":"ContainerStarted","Data":"66052675c3fb9a13c70800eadf465dcb4539d41cd3b47c7dbd0eae435e06dd5a"} Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.777943 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"9dcb6c56-c540-463d-a481-0de5eb693e2b","Type":"ContainerStarted","Data":"4f8d278a1083a6f95123d4a89c6498d66904a592dc4439ebfa05cc7a036cf852"} Dec 03 09:28:41 crc kubenswrapper[4856]: I1203 09:28:41.820175 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7tm2h" podStartSLOduration=31.138256665 podStartE2EDuration="34.820150719s" podCreationTimestamp="2025-12-03 09:28:07 +0000 UTC" firstStartedPulling="2025-12-03 09:28:36.960000297 +0000 UTC m=+985.142892598" lastFinishedPulling="2025-12-03 09:28:40.641894351 +0000 UTC m=+988.824786652" observedRunningTime="2025-12-03 09:28:41.818736643 +0000 UTC m=+990.001628944" watchObservedRunningTime="2025-12-03 09:28:41.820150719 +0000 UTC m=+990.003043020" Dec 03 09:28:42 crc kubenswrapper[4856]: I1203 09:28:42.565797 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 03 09:28:42 crc kubenswrapper[4856]: I1203 09:28:42.818624 4856 generic.go:334] "Generic (PLEG): container finished" podID="5816b942-fa92-48fe-a44a-279e02ae7c91" containerID="f83952e239f39e2bbde5f80452af3a068bcf609fd1745ea43fad3b0f1167fab8" exitCode=0 Dec 03 09:28:42 crc kubenswrapper[4856]: I1203 09:28:42.818711 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g4lq4" event={"ID":"5816b942-fa92-48fe-a44a-279e02ae7c91","Type":"ContainerDied","Data":"f83952e239f39e2bbde5f80452af3a068bcf609fd1745ea43fad3b0f1167fab8"} Dec 03 09:28:42 crc kubenswrapper[4856]: I1203 09:28:42.847515 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16e71e20-1329-46fc-b544-39febc69ae60","Type":"ContainerStarted","Data":"e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132"} Dec 03 09:28:42 crc kubenswrapper[4856]: I1203 09:28:42.870655 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgqs6" event={"ID":"94f7ba13-4be7-4bfc-a5b7-acd860994e94","Type":"ContainerStarted","Data":"a91269e93928ae670344e835a674cee9dda7bf30b39ffff27cbd8820993acd64"} Dec 03 09:28:42 crc kubenswrapper[4856]: I1203 09:28:42.930782 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kgqs6" podStartSLOduration=22.395261496 podStartE2EDuration="24.930760534s" podCreationTimestamp="2025-12-03 09:28:18 +0000 UTC" firstStartedPulling="2025-12-03 09:28:39.793206677 +0000 UTC m=+987.976098978" lastFinishedPulling="2025-12-03 09:28:42.328705715 +0000 UTC m=+990.511598016" observedRunningTime="2025-12-03 09:28:42.928514268 +0000 UTC m=+991.111406569" watchObservedRunningTime="2025-12-03 09:28:42.930760534 +0000 UTC m=+991.113652835" Dec 03 09:28:43 crc kubenswrapper[4856]: I1203 09:28:43.948861 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g4lq4" event={"ID":"5816b942-fa92-48fe-a44a-279e02ae7c91","Type":"ContainerStarted","Data":"95407ed7bd2f4dd5ea5afad66b8961648774d53227849bdaaa331626c3c32229"} Dec 03 09:28:43 crc kubenswrapper[4856]: I1203 09:28:43.949756 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g4lq4" event={"ID":"5816b942-fa92-48fe-a44a-279e02ae7c91","Type":"ContainerStarted","Data":"28451937ac2bb018c477e3850c1035bf06394283d2ae4210d5453d7c921cd727"} Dec 03 09:28:43 crc kubenswrapper[4856]: I1203 09:28:43.964580 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0","Type":"ContainerStarted","Data":"ccb6a3207c82d0e96408910cf9be645ffabb830eb04b33c983f86ac1bf5180f6"} Dec 03 09:28:43 crc kubenswrapper[4856]: I1203 09:28:43.979630 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-g4lq4" podStartSLOduration=33.875459679 podStartE2EDuration="36.979607345s" podCreationTimestamp="2025-12-03 09:28:07 +0000 UTC" firstStartedPulling="2025-12-03 09:28:37.493905322 +0000 UTC m=+985.676797633" lastFinishedPulling="2025-12-03 09:28:40.598052998 +0000 UTC m=+988.780945299" observedRunningTime="2025-12-03 09:28:43.978206809 +0000 UTC m=+992.161099130" watchObservedRunningTime="2025-12-03 09:28:43.979607345 +0000 UTC m=+992.162499646" Dec 03 09:28:44 crc kubenswrapper[4856]: I1203 09:28:44.972380 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:44 crc kubenswrapper[4856]: I1203 09:28:44.972955 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.294309 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rqk5m"] Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.355821 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nnvvc"] Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.357243 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.364065 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.364430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrbm4\" (UniqueName: \"kubernetes.io/projected/3884e6c0-b02a-48d4-8796-b118f45a4990-kube-api-access-mrbm4\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.364582 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-config\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.389968 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nnvvc"] Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.465655 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.465745 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrbm4\" 
(UniqueName: \"kubernetes.io/projected/3884e6c0-b02a-48d4-8796-b118f45a4990-kube-api-access-mrbm4\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.465777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-config\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.466842 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-config\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.466944 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.511685 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrbm4\" (UniqueName: \"kubernetes.io/projected/3884e6c0-b02a-48d4-8796-b118f45a4990-kube-api-access-mrbm4\") pod \"dnsmasq-dns-7cb5889db5-nnvvc\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.692439 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:45 crc kubenswrapper[4856]: I1203 09:28:45.987360 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb","Type":"ContainerStarted","Data":"0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3"} Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.375095 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.381878 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.386356 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.386578 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.387425 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.387647 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-p99cr" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.397308 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.488963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.489105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.489187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-lock\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.489244 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xxzc\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-kube-api-access-9xxzc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.489282 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-cache\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.590678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.590741 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-lock\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.590777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9xxzc\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-kube-api-access-9xxzc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.590890 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-cache\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.590945 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: E1203 09:28:46.591127 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 09:28:46 crc kubenswrapper[4856]: E1203 09:28:46.591142 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 09:28:46 crc kubenswrapper[4856]: E1203 09:28:46.591195 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift podName:9a29fb43-ed6d-499a-a4f7-b847de3dbf71 nodeName:}" failed. No retries permitted until 2025-12-03 09:28:47.091171918 +0000 UTC m=+995.274064219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift") pod "swift-storage-0" (UID: "9a29fb43-ed6d-499a-a4f7-b847de3dbf71") : configmap "swift-ring-files" not found Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.591312 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.593122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-cache\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.593709 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-lock\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.621730 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.637787 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xxzc\" (UniqueName: 
\"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-kube-api-access-9xxzc\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.971781 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-frh7v"] Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.974186 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.977239 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.977262 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.977295 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 03 09:28:46 crc kubenswrapper[4856]: I1203 09:28:46.984236 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-frh7v"] Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.049567 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-scripts\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.049925 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-combined-ca-bundle\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.049967 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-dispersionconf\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.049990 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-ring-data-devices\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.050023 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdpj\" (UniqueName: \"kubernetes.io/projected/320b56b0-4905-4a31-bc37-13106b993909-kube-api-access-2hdpj\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.050075 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/320b56b0-4905-4a31-bc37-13106b993909-etc-swift\") pod \"swift-ring-rebalance-frh7v\" (UID: 
\"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.050094 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-swiftconf\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.152829 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/320b56b0-4905-4a31-bc37-13106b993909-etc-swift\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.152885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-swiftconf\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.152923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.152982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-scripts\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.153002 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-combined-ca-bundle\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.153029 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-dispersionconf\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.153049 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-ring-data-devices\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.153083 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdpj\" (UniqueName: \"kubernetes.io/projected/320b56b0-4905-4a31-bc37-13106b993909-kube-api-access-2hdpj\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: 
I1203 09:28:47.153932 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/320b56b0-4905-4a31-bc37-13106b993909-etc-swift\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: E1203 09:28:47.154076 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 09:28:47 crc kubenswrapper[4856]: E1203 09:28:47.154116 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 09:28:47 crc kubenswrapper[4856]: E1203 09:28:47.154206 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift podName:9a29fb43-ed6d-499a-a4f7-b847de3dbf71 nodeName:}" failed. No retries permitted until 2025-12-03 09:28:48.154177294 +0000 UTC m=+996.337069595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift") pod "swift-storage-0" (UID: "9a29fb43-ed6d-499a-a4f7-b847de3dbf71") : configmap "swift-ring-files" not found Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.154540 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-scripts\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.170424 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-ring-data-devices\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.171499 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-swiftconf\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.171619 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-dispersionconf\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.172151 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-combined-ca-bundle\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.179769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdpj\" (UniqueName: \"kubernetes.io/projected/320b56b0-4905-4a31-bc37-13106b993909-kube-api-access-2hdpj\") pod \"swift-ring-rebalance-frh7v\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " 
pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:47 crc kubenswrapper[4856]: I1203 09:28:47.311682 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.087533 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" event={"ID":"d5d35d2a-2149-47d1-a385-d4ff0f058904","Type":"ContainerDied","Data":"bf4873500be407d3d107a8660b5895feb3ce223a077e18ef45fd0cb0313374e0"} Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.088103 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf4873500be407d3d107a8660b5895feb3ce223a077e18ef45fd0cb0313374e0" Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.143728 4856 generic.go:334] "Generic (PLEG): container finished" podID="643c316d-09f1-4aee-8d49-34989baaa50e" containerID="4c55e87e264dd4256aa70e523025001da9af618eeb278a96cf1bcf39508a730f" exitCode=0 Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.144346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"643c316d-09f1-4aee-8d49-34989baaa50e","Type":"ContainerDied","Data":"4c55e87e264dd4256aa70e523025001da9af618eeb278a96cf1bcf39508a730f"} Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.167866 4856 generic.go:334] "Generic (PLEG): container finished" podID="e8b16ba2-5c26-4b5e-85ab-d99d915b68d0" containerID="ccb6a3207c82d0e96408910cf9be645ffabb830eb04b33c983f86ac1bf5180f6" exitCode=0 Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.167936 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0","Type":"ContainerDied","Data":"ccb6a3207c82d0e96408910cf9be645ffabb830eb04b33c983f86ac1bf5180f6"} Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.187527 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m" Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.188694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:48 crc kubenswrapper[4856]: E1203 09:28:48.189510 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 09:28:48 crc kubenswrapper[4856]: E1203 09:28:48.195888 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 09:28:48 crc kubenswrapper[4856]: E1203 09:28:48.196383 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift podName:9a29fb43-ed6d-499a-a4f7-b847de3dbf71 nodeName:}" failed. No retries permitted until 2025-12-03 09:28:50.196340046 +0000 UTC m=+998.379232347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift") pod "swift-storage-0" (UID: "9a29fb43-ed6d-499a-a4f7-b847de3dbf71") : configmap "swift-ring-files" not found
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.290743 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfgd2\" (UniqueName: \"kubernetes.io/projected/d5d35d2a-2149-47d1-a385-d4ff0f058904-kube-api-access-sfgd2\") pod \"d5d35d2a-2149-47d1-a385-d4ff0f058904\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") "
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.291412 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-dns-svc\") pod \"d5d35d2a-2149-47d1-a385-d4ff0f058904\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") "
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.291523 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-config\") pod \"d5d35d2a-2149-47d1-a385-d4ff0f058904\" (UID: \"d5d35d2a-2149-47d1-a385-d4ff0f058904\") "
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.292797 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5d35d2a-2149-47d1-a385-d4ff0f058904" (UID: "d5d35d2a-2149-47d1-a385-d4ff0f058904"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.293627 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-config" (OuterVolumeSpecName: "config") pod "d5d35d2a-2149-47d1-a385-d4ff0f058904" (UID: "d5d35d2a-2149-47d1-a385-d4ff0f058904"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.296057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d35d2a-2149-47d1-a385-d4ff0f058904-kube-api-access-sfgd2" (OuterVolumeSpecName: "kube-api-access-sfgd2") pod "d5d35d2a-2149-47d1-a385-d4ff0f058904" (UID: "d5d35d2a-2149-47d1-a385-d4ff0f058904"). InnerVolumeSpecName "kube-api-access-sfgd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
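[Editor's note] The repeated `configmap "swift-ring-files" not found` failures above mean the kubelet cannot populate the projected etc-swift volume for swift-storage-0 until something (here, the swift-ring-rebalance job started later in the log) publishes that ConfigMap. A minimal client-go sketch of the check an operator might run to confirm the ConfigMap is still absent; the kubeconfig path is an assumption, not taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the actual node/workstation.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/core/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same lookup the projected-volume plugin fails in projected.go above.
	cm, err := client.CoreV1().ConfigMaps("openstack").Get(context.Background(), "swift-ring-files", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("swift-ring-files not created yet; etc-swift mounts will keep failing")
	case err != nil:
		log.Fatal(err)
	default:
		fmt.Printf("swift-ring-files exists with %d keys; the mount retry should succeed\n", len(cm.Data))
	}
}
```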
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.396500 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfgd2\" (UniqueName: \"kubernetes.io/projected/d5d35d2a-2149-47d1-a385-d4ff0f058904-kube-api-access-sfgd2\") on node \"crc\" DevicePath \"\""
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.396537 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.396548 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d35d2a-2149-47d1-a385-d4ff0f058904-config\") on node \"crc\" DevicePath \"\""
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.591497 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nnvvc"]
Dec 03 09:28:48 crc kubenswrapper[4856]: I1203 09:28:48.675077 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-frh7v"]
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.190930 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9dcb6c56-c540-463d-a481-0de5eb693e2b","Type":"ContainerStarted","Data":"63bf44ffafb4e61a457c71d881fcbb6d8ad6cb41cded9e36e0669332bd488a5c"}
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.193870 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"643c316d-09f1-4aee-8d49-34989baaa50e","Type":"ContainerStarted","Data":"d35e2af66a76b16530b2b0feac0c8024d9eec902a926527d44eb665528989adf"}
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.196692 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" event={"ID":"3884e6c0-b02a-48d4-8796-b118f45a4990","Type":"ContainerStarted","Data":"76ba6d9b1329a25751e5c40d84109c7d1a8761c8c5d4e67a6920b0aa7c99bba3"}
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.201159 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e8b16ba2-5c26-4b5e-85ab-d99d915b68d0","Type":"ContainerStarted","Data":"e2e39cc82a1a5b83153f06342750cbfedb46dec213a98f2f2121c1ab6a2f538f"}
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.203073 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-frh7v" event={"ID":"320b56b0-4905-4a31-bc37-13106b993909","Type":"ContainerStarted","Data":"3016783037aa8ee4a986db1bd663f71f7552848a4ad47530646aabcb687a6207"}
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.204897 4856 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rqk5m"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.205084 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5596d8aa-639a-4e4f-8905-ceb3cbb622cd","Type":"ContainerStarted","Data":"69c93e623f0052c448dd9968a5321f825d3fff4d8b3a1fe74ff7a52a492a2a9b"}
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.253467 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.530266428 podStartE2EDuration="39.253431245s" podCreationTimestamp="2025-12-03 09:28:10 +0000 UTC" firstStartedPulling="2025-12-03 09:28:36.0543244 +0000 UTC m=+984.237216701" lastFinishedPulling="2025-12-03 09:28:47.777489217 +0000 UTC m=+995.960381518" observedRunningTime="2025-12-03 09:28:49.239146055 +0000 UTC m=+997.422038356" watchObservedRunningTime="2025-12-03 09:28:49.253431245 +0000 UTC m=+997.436323546"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.268457 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=33.022739759 podStartE2EDuration="44.268436072s" podCreationTimestamp="2025-12-03 09:28:05 +0000 UTC" firstStartedPulling="2025-12-03 09:28:36.542715649 +0000 UTC m=+984.725607940" lastFinishedPulling="2025-12-03 09:28:47.788411952 +0000 UTC m=+995.971304253" observedRunningTime="2025-12-03 09:28:49.263313924 +0000 UTC m=+997.446206235" watchObservedRunningTime="2025-12-03 09:28:49.268436072 +0000 UTC m=+997.451328373"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.292165 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.761476621 podStartE2EDuration="50.292135639s" podCreationTimestamp="2025-12-03 09:27:59 +0000 UTC" firstStartedPulling="2025-12-03 09:28:01.763539006 +0000 UTC m=+949.946431297" lastFinishedPulling="2025-12-03 09:28:41.294198014 +0000 UTC m=+989.477090315" observedRunningTime="2025-12-03 09:28:49.291618796 +0000 UTC m=+997.474511107" watchObservedRunningTime="2025-12-03 09:28:49.292135639 +0000 UTC m=+997.475027940"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.343260 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371987.511536 podStartE2EDuration="49.343240085s" podCreationTimestamp="2025-12-03 09:28:00 +0000 UTC" firstStartedPulling="2025-12-03 09:28:04.099581886 +0000 UTC m=+952.282474187" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:28:49.324480083 +0000 UTC m=+997.507372384" watchObservedRunningTime="2025-12-03 09:28:49.343240085 +0000 UTC m=+997.526132386"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.365220 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kgqs6"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.365271 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kgqs6"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.400750 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rqk5m"]
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.405889 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rqk5m"]
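[Editor's note] The openstack-cell1-galera-0 startup-latency entry above reports podStartSLOduration=-9223371987.511536 together with lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC", i.e. Go's zero time.Time leaked into the latency arithmetic. The magnitude is consistent with a saturating subtraction against the zero time: time.Time.Sub clamps at time.Duration's int64 floor (about -292 years) instead of overflowing. A minimal illustration of that clamping, not the kubelet's actual latency-tracker code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
	creation := time.Date(2025, 12, 3, 9, 28, 0, 0, time.UTC)

	// Sub saturates at the minimum time.Duration rather than overflowing int64.
	d := lastFinishedPulling.Sub(creation)
	fmt.Println(d.Seconds()) // ≈ -9.223372036854776e+09, matching the log's magnitude
}
```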
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.437352 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.450996 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kgqs6"
Dec 03 09:28:49 crc kubenswrapper[4856]: I1203 09:28:49.502799 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 03 09:28:50 crc kubenswrapper[4856]: E1203 09:28:50.213369 4856 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Dec 03 09:28:50 crc kubenswrapper[4856]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Dec 03 09:28:50 crc kubenswrapper[4856]: > podSandboxID="de91caadc2b210e64543cdc57899a5efbdbc2c056840914dbabb4cec3834ac27"
Dec 03 09:28:50 crc kubenswrapper[4856]: E1203 09:28:50.213874 4856 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Dec 03 09:28:50 crc kubenswrapper[4856]: init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lb4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zqxgb_openstack(d1688a3e-c65f-4e44-aa48-b8dc9d8ad927): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Dec 03 09:28:50 crc kubenswrapper[4856]: > logger="UnhandledError"
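[Editor's note] In the CreateContainerError above, the runtime reports the mount target as the relative path `etc/dnsmasq.d/hosts/dns-svc` while the dumped Container spec declares an absolute MountPath with a SubPath. With subPath mounts the kubelet bind-mounts a per-pod file from /var/lib/kubelet/pods/<uid>/volume-subpaths/... into the container. For reference, the VolumeMounts from the dump correspond to client-go structs like the following sketch, reconstructed from the log rather than taken from the operator's actual manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Mirrors the VolumeMounts in the logged Container dump: single keys of
	// the config and dns-svc ConfigMaps are projected via subPath mounts.
	mounts := []corev1.VolumeMount{
		{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
		{Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
	}
	for _, m := range mounts {
		fmt.Printf("%s -> %s (subPath %q)\n", m.Name, m.MountPath, m.SubPath)
	}
}
```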
Dec 03 09:28:50 crc kubenswrapper[4856]: E1203 09:28:50.215148 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" podUID="d1688a3e-c65f-4e44-aa48-b8dc9d8ad927"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.224783 4856 generic.go:334] "Generic (PLEG): container finished" podID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerID="100b52a7cbf25616183859d2b066d2249b684ffd3828b7d7a855aa9c63d33c43" exitCode=0
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.224993 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" event={"ID":"3884e6c0-b02a-48d4-8796-b118f45a4990","Type":"ContainerDied","Data":"100b52a7cbf25616183859d2b066d2249b684ffd3828b7d7a855aa9c63d33c43"}
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.225613 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.271261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0"
Dec 03 09:28:50 crc kubenswrapper[4856]: E1203 09:28:50.271540 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 03 09:28:50 crc kubenswrapper[4856]: E1203 09:28:50.271583 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 03 09:28:50 crc kubenswrapper[4856]: E1203 09:28:50.271669 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift podName:9a29fb43-ed6d-499a-a4f7-b847de3dbf71 nodeName:}" failed. No retries permitted until 2025-12-03 09:28:54.271642645 +0000 UTC m=+1002.454534946 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift") pod "swift-storage-0" (UID: "9a29fb43-ed6d-499a-a4f7-b847de3dbf71") : configmap "swift-ring-files" not found
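[Editor's note] Across the etc-swift failures for swift-storage-0, the kubelet's nestedpendingoperations backoff doubles on each consecutive failure: durationBeforeRetry runs 500ms, 1s, 2s, and now 4s. A minimal sketch of that doubling-with-a-cap pattern; the initial value and doubling match the log, while the cap is an assumption and this is not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// nextRetry doubles the wait after every consecutive failure, capped so the
// delay cannot grow without bound. 500ms and the doubling mirror the
// durationBeforeRetry sequence in the log; maxDelay is assumed.
func nextRetry(prev time.Duration) time.Duration {
	const initial = 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	if prev <= 0 {
		return initial
	}
	if next := prev * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 4; i++ {
		d = nextRetry(d)
		fmt.Println(d) // 500ms, 1s, 2s, 4s — as observed in the log
	}
}
```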
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.301841 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.302043 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kgqs6"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.539890 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.618250 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.667609 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqxgb"]
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.729258 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d35d2a-2149-47d1-a385-d4ff0f058904" path="/var/lib/kubelet/pods/d5d35d2a-2149-47d1-a385-d4ff0f058904/volumes"
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.729882 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wcs7x"]
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.736083 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kgqs6"]
Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.736255 4856 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.739999 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.748773 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a73fb24-fb9d-4037-b540-fcadcd423024-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.748856 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a73fb24-fb9d-4037-b540-fcadcd423024-combined-ca-bundle\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.748998 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7djr\" (UniqueName: \"kubernetes.io/projected/9a73fb24-fb9d-4037-b540-fcadcd423024-kube-api-access-s7djr\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.749029 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9a73fb24-fb9d-4037-b540-fcadcd423024-ovs-rundir\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.749048 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9a73fb24-fb9d-4037-b540-fcadcd423024-ovn-rundir\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.749088 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a73fb24-fb9d-4037-b540-fcadcd423024-config\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.755068 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wcs7x"] Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.758567 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.758598 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.762154 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-mpgsg"] Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.775747 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.782926 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.849485 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-mpgsg"] Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.850812 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a73fb24-fb9d-4037-b540-fcadcd423024-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.850893 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a73fb24-fb9d-4037-b540-fcadcd423024-combined-ca-bundle\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.850980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7djr\" (UniqueName: \"kubernetes.io/projected/9a73fb24-fb9d-4037-b540-fcadcd423024-kube-api-access-s7djr\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.851008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9a73fb24-fb9d-4037-b540-fcadcd423024-ovs-rundir\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.851032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9a73fb24-fb9d-4037-b540-fcadcd423024-ovn-rundir\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.851350 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a73fb24-fb9d-4037-b540-fcadcd423024-config\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.851646 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9a73fb24-fb9d-4037-b540-fcadcd423024-ovn-rundir\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.851687 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9a73fb24-fb9d-4037-b540-fcadcd423024-ovs-rundir\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.852301 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a73fb24-fb9d-4037-b540-fcadcd423024-config\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.889618 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a73fb24-fb9d-4037-b540-fcadcd423024-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.894595 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a73fb24-fb9d-4037-b540-fcadcd423024-combined-ca-bundle\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.905376 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7djr\" (UniqueName: \"kubernetes.io/projected/9a73fb24-fb9d-4037-b540-fcadcd423024-kube-api-access-s7djr\") pod \"ovn-controller-metrics-wcs7x\" (UID: \"9a73fb24-fb9d-4037-b540-fcadcd423024\") " pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.953593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-dns-svc\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.953677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.953719 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s44f\" (UniqueName: \"kubernetes.io/projected/f5eee554-aa95-4e25-8bf7-852b84febadf-kube-api-access-5s44f\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:50 crc kubenswrapper[4856]: I1203 09:28:50.953760 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.017722 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nnvvc"] Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.055837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-dns-svc\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: 
\"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.055930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.055994 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s44f\" (UniqueName: \"kubernetes.io/projected/f5eee554-aa95-4e25-8bf7-852b84febadf-kube-api-access-5s44f\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.056040 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.057690 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.060120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-dns-svc\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.060746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.086660 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wcs7x" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.099019 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5wd7d"] Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.113734 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s44f\" (UniqueName: \"kubernetes.io/projected/f5eee554-aa95-4e25-8bf7-852b84febadf-kube-api-access-5s44f\") pod \"dnsmasq-dns-57d65f699f-mpgsg\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.113980 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.129973 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.130111 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.146887 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5wd7d"] Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.158525 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.158621 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.158662 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-config\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.158692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2n6\" (UniqueName: \"kubernetes.io/projected/996c5c67-0f32-4d87-b31b-35e36f3d7b67-kube-api-access-fp2n6\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.158729 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.262448 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.262521 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.262549 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-config\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.262574 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2n6\" (UniqueName: \"kubernetes.io/projected/996c5c67-0f32-4d87-b31b-35e36f3d7b67-kube-api-access-fp2n6\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.262603 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.263707 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.265571 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.266563 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.266661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-config\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.303940 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2n6\" (UniqueName: \"kubernetes.io/projected/996c5c67-0f32-4d87-b31b-35e36f3d7b67-kube-api-access-fp2n6\") pod \"dnsmasq-dns-b8fbc5445-5wd7d\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.316392 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" event={"ID":"3884e6c0-b02a-48d4-8796-b118f45a4990","Type":"ContainerStarted","Data":"7623c1bf7880458259e64305d327d4efcac7aad6029b9a9b3b1a19a502cf5312"} Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.317158 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.324654 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"6707abf6-3ddf-4cf5-91d7-10a6a229d274","Type":"ContainerStarted","Data":"f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45"} Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.325875 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.326440 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.358704 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" podStartSLOduration=5.831190114 podStartE2EDuration="6.358677507s" podCreationTimestamp="2025-12-03 09:28:45 +0000 UTC" firstStartedPulling="2025-12-03 09:28:48.60029006 +0000 UTC m=+996.783182351" lastFinishedPulling="2025-12-03 09:28:49.127777443 +0000 UTC m=+997.310669744" observedRunningTime="2025-12-03 09:28:51.357496487 +0000 UTC m=+999.540388808" watchObservedRunningTime="2025-12-03 09:28:51.358677507 +0000 UTC m=+999.541569808" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.441354 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.390289535 podStartE2EDuration="47.441318126s" podCreationTimestamp="2025-12-03 09:28:04 +0000 UTC" firstStartedPulling="2025-12-03 09:28:06.266340515 +0000 UTC m=+954.449232816" lastFinishedPulling="2025-12-03 09:28:50.317369106 +0000 UTC m=+998.500261407" observedRunningTime="2025-12-03 09:28:51.434944966 +0000 UTC m=+999.617837277" watchObservedRunningTime="2025-12-03 09:28:51.441318126 +0000 UTC m=+999.624210427" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.490814 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.593489 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.882472 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.885210 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.892617 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.894479 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nw7f8" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.894621 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.894668 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.898226 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.924052 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wcs7x"] Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979061 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np46l\" (UniqueName: \"kubernetes.io/projected/b9109180-5009-4b2b-b2ff-b56e90bf72aa-kube-api-access-np46l\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979336 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9109180-5009-4b2b-b2ff-b56e90bf72aa-scripts\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979373 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9109180-5009-4b2b-b2ff-b56e90bf72aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979436 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979544 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:51 crc kubenswrapper[4856]: I1203 09:28:51.979637 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9109180-5009-4b2b-b2ff-b56e90bf72aa-config\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9109180-5009-4b2b-b2ff-b56e90bf72aa-scripts\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082611 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9109180-5009-4b2b-b2ff-b56e90bf72aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082636 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082698 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082904 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9109180-5009-4b2b-b2ff-b56e90bf72aa-config\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.082949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np46l\" (UniqueName: \"kubernetes.io/projected/b9109180-5009-4b2b-b2ff-b56e90bf72aa-kube-api-access-np46l\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.084274 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9109180-5009-4b2b-b2ff-b56e90bf72aa-scripts\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.085573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9109180-5009-4b2b-b2ff-b56e90bf72aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.085576 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9109180-5009-4b2b-b2ff-b56e90bf72aa-config\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.091921 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.092650 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.118130 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9109180-5009-4b2b-b2ff-b56e90bf72aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.133736 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np46l\" (UniqueName: \"kubernetes.io/projected/b9109180-5009-4b2b-b2ff-b56e90bf72aa-kube-api-access-np46l\") pod \"ovn-northd-0\" (UID: \"b9109180-5009-4b2b-b2ff-b56e90bf72aa\") " pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.141168 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.206190 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-mpgsg"] Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.270306 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.270653 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.271552 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.286276 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lb4g\" (UniqueName: \"kubernetes.io/projected/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-kube-api-access-4lb4g\") pod \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.286399 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-config\") pod \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.286652 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-dns-svc\") pod \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\" (UID: \"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927\") " Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.294606 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-kube-api-access-4lb4g" (OuterVolumeSpecName: "kube-api-access-4lb4g") pod "d1688a3e-c65f-4e44-aa48-b8dc9d8ad927" (UID: "d1688a3e-c65f-4e44-aa48-b8dc9d8ad927"). InnerVolumeSpecName "kube-api-access-4lb4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.313676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-config" (OuterVolumeSpecName: "config") pod "d1688a3e-c65f-4e44-aa48-b8dc9d8ad927" (UID: "d1688a3e-c65f-4e44-aa48-b8dc9d8ad927"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.313813 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1688a3e-c65f-4e44-aa48-b8dc9d8ad927" (UID: "d1688a3e-c65f-4e44-aa48-b8dc9d8ad927"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.363669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" event={"ID":"f5eee554-aa95-4e25-8bf7-852b84febadf","Type":"ContainerStarted","Data":"4c70f22246e9bb52a1b5834268776abecc06604543e19dec2a0c1f3b021d405b"} Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.366661 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wcs7x" event={"ID":"9a73fb24-fb9d-4037-b540-fcadcd423024","Type":"ContainerStarted","Data":"4a20a15676d6da92872510142ea0ef87581e39495989997bae4baf795493501d"} Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.369212 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" event={"ID":"d1688a3e-c65f-4e44-aa48-b8dc9d8ad927","Type":"ContainerDied","Data":"de91caadc2b210e64543cdc57899a5efbdbc2c056840914dbabb4cec3834ac27"} Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.369317 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zqxgb" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.370526 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="dnsmasq-dns" containerID="cri-o://7623c1bf7880458259e64305d327d4efcac7aad6029b9a9b3b1a19a502cf5312" gracePeriod=10 Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.372286 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kgqs6" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="registry-server" containerID="cri-o://a91269e93928ae670344e835a674cee9dda7bf30b39ffff27cbd8820993acd64" gracePeriod=2 Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.389656 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lb4g\" (UniqueName: \"kubernetes.io/projected/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-kube-api-access-4lb4g\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.390182 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.390195 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.467366 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqxgb"] Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.518885 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zqxgb"] Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.555213 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5wd7d"] Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.761247 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.761924 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.775461 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1688a3e-c65f-4e44-aa48-b8dc9d8ad927" path="/var/lib/kubelet/pods/d1688a3e-c65f-4e44-aa48-b8dc9d8ad927/volumes" Dec 03 09:28:52 crc kubenswrapper[4856]: I1203 09:28:52.942982 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.315348 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78d2s"] Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.329567 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.354432 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78d2s"] Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.406315 4856 generic.go:334] "Generic (PLEG): container finished" podID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerID="b84a888297f5d9ba6e714b848fcfdac9a4cebe65c4d0f4d263441d1a4b08fdc4" exitCode=0 Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.406941 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" event={"ID":"996c5c67-0f32-4d87-b31b-35e36f3d7b67","Type":"ContainerDied","Data":"b84a888297f5d9ba6e714b848fcfdac9a4cebe65c4d0f4d263441d1a4b08fdc4"} Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.407053 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" event={"ID":"996c5c67-0f32-4d87-b31b-35e36f3d7b67","Type":"ContainerStarted","Data":"abf45dd0592f241a037985932f138b7b219ba7e9f9f1fb2907f8ed7b77967cb1"} Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.422385 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wcs7x" event={"ID":"9a73fb24-fb9d-4037-b540-fcadcd423024","Type":"ContainerStarted","Data":"fcebc9351c82577e5d6deb3e9a24cb40a2551b43806907767fc1b1cee815029b"} Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.434304 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-utilities\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.434380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwpxl\" (UniqueName: \"kubernetes.io/projected/a4614b22-bca8-4306-bd6a-26d99b904420-kube-api-access-kwpxl\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.434438 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-catalog-content\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.434591 4856 generic.go:334] "Generic (PLEG): container finished" podID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerID="a91269e93928ae670344e835a674cee9dda7bf30b39ffff27cbd8820993acd64" exitCode=0 Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.434697 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgqs6" event={"ID":"94f7ba13-4be7-4bfc-a5b7-acd860994e94","Type":"ContainerDied","Data":"a91269e93928ae670344e835a674cee9dda7bf30b39ffff27cbd8820993acd64"} Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.454111 4856 generic.go:334] "Generic (PLEG): container finished" podID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerID="7623c1bf7880458259e64305d327d4efcac7aad6029b9a9b3b1a19a502cf5312" exitCode=0 Dec 03 09:28:53 crc 
kubenswrapper[4856]: I1203 09:28:53.454207 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" event={"ID":"3884e6c0-b02a-48d4-8796-b118f45a4990","Type":"ContainerDied","Data":"7623c1bf7880458259e64305d327d4efcac7aad6029b9a9b3b1a19a502cf5312"} Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.459843 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerID="8e1651408309e3855a3b582292e54bdeb89242d30296185312d38b6036926b06" exitCode=0 Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.461649 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" event={"ID":"f5eee554-aa95-4e25-8bf7-852b84febadf","Type":"ContainerDied","Data":"8e1651408309e3855a3b582292e54bdeb89242d30296185312d38b6036926b06"} Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.482879 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wcs7x" podStartSLOduration=3.482852766 podStartE2EDuration="3.482852766s" podCreationTimestamp="2025-12-03 09:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:28:53.468689369 +0000 UTC m=+1001.651581680" watchObservedRunningTime="2025-12-03 09:28:53.482852766 +0000 UTC m=+1001.665745067" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.536055 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-utilities\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.538377 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwpxl\" (UniqueName: \"kubernetes.io/projected/a4614b22-bca8-4306-bd6a-26d99b904420-kube-api-access-kwpxl\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.538472 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-catalog-content\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.539451 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-utilities\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.540821 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-catalog-content\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.581373 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwpxl\" (UniqueName: 
\"kubernetes.io/projected/a4614b22-bca8-4306-bd6a-26d99b904420-kube-api-access-kwpxl\") pod \"redhat-marketplace-78d2s\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:53 crc kubenswrapper[4856]: I1203 09:28:53.665364 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:28:54 crc kubenswrapper[4856]: I1203 09:28:54.354904 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:28:54 crc kubenswrapper[4856]: E1203 09:28:54.355227 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 09:28:54 crc kubenswrapper[4856]: E1203 09:28:54.355279 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 09:28:54 crc kubenswrapper[4856]: E1203 09:28:54.355372 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift podName:9a29fb43-ed6d-499a-a4f7-b847de3dbf71 nodeName:}" failed. No retries permitted until 2025-12-03 09:29:02.355344909 +0000 UTC m=+1010.538237200 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift") pod "swift-storage-0" (UID: "9a29fb43-ed6d-499a-a4f7-b847de3dbf71") : configmap "swift-ring-files" not found Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.553978 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.820217 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.873721 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.889460 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-config\") pod \"3884e6c0-b02a-48d4-8796-b118f45a4990\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.893752 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrbm4\" (UniqueName: \"kubernetes.io/projected/3884e6c0-b02a-48d4-8796-b118f45a4990-kube-api-access-mrbm4\") pod \"3884e6c0-b02a-48d4-8796-b118f45a4990\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.894148 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-dns-svc\") pod \"3884e6c0-b02a-48d4-8796-b118f45a4990\" (UID: \"3884e6c0-b02a-48d4-8796-b118f45a4990\") " Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.901720 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3884e6c0-b02a-48d4-8796-b118f45a4990-kube-api-access-mrbm4" (OuterVolumeSpecName: "kube-api-access-mrbm4") pod "3884e6c0-b02a-48d4-8796-b118f45a4990" (UID: "3884e6c0-b02a-48d4-8796-b118f45a4990"). InnerVolumeSpecName "kube-api-access-mrbm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.998736 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-catalog-content\") pod \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.998898 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-utilities\") pod \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " Dec 03 09:28:55 crc kubenswrapper[4856]: I1203 09:28:55.999070 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmjt\" (UniqueName: \"kubernetes.io/projected/94f7ba13-4be7-4bfc-a5b7-acd860994e94-kube-api-access-fpmjt\") pod \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\" (UID: \"94f7ba13-4be7-4bfc-a5b7-acd860994e94\") " Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:55.999751 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrbm4\" (UniqueName: \"kubernetes.io/projected/3884e6c0-b02a-48d4-8796-b118f45a4990-kube-api-access-mrbm4\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.000678 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-utilities" (OuterVolumeSpecName: "utilities") pod "94f7ba13-4be7-4bfc-a5b7-acd860994e94" (UID: "94f7ba13-4be7-4bfc-a5b7-acd860994e94"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.007117 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f7ba13-4be7-4bfc-a5b7-acd860994e94-kube-api-access-fpmjt" (OuterVolumeSpecName: "kube-api-access-fpmjt") pod "94f7ba13-4be7-4bfc-a5b7-acd860994e94" (UID: "94f7ba13-4be7-4bfc-a5b7-acd860994e94"). InnerVolumeSpecName "kube-api-access-fpmjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.011201 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-config" (OuterVolumeSpecName: "config") pod "3884e6c0-b02a-48d4-8796-b118f45a4990" (UID: "3884e6c0-b02a-48d4-8796-b118f45a4990"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.016085 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3884e6c0-b02a-48d4-8796-b118f45a4990" (UID: "3884e6c0-b02a-48d4-8796-b118f45a4990"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.082324 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94f7ba13-4be7-4bfc-a5b7-acd860994e94" (UID: "94f7ba13-4be7-4bfc-a5b7-acd860994e94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.104167 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpmjt\" (UniqueName: \"kubernetes.io/projected/94f7ba13-4be7-4bfc-a5b7-acd860994e94-kube-api-access-fpmjt\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.104214 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.104232 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.104243 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f7ba13-4be7-4bfc-a5b7-acd860994e94-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.104256 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3884e6c0-b02a-48d4-8796-b118f45a4990-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.146479 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78d2s"] Dec 03 09:28:56 crc kubenswrapper[4856]: W1203 09:28:56.154671 4856 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4614b22_bca8_4306_bd6a_26d99b904420.slice/crio-795ae7af513846c16b049a7ed0d153dd6f3466202b1e06e0d2a372907e7ca331 WatchSource:0}: Error finding container 795ae7af513846c16b049a7ed0d153dd6f3466202b1e06e0d2a372907e7ca331: Status 404 returned error can't find the container with id 795ae7af513846c16b049a7ed0d153dd6f3466202b1e06e0d2a372907e7ca331 Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.495881 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" event={"ID":"f5eee554-aa95-4e25-8bf7-852b84febadf","Type":"ContainerStarted","Data":"3937d37b5f889493e9a82b14b119e62e261c22b6c729446c87b19ed105ca230a"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.496503 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.503300 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" event={"ID":"996c5c67-0f32-4d87-b31b-35e36f3d7b67","Type":"ContainerStarted","Data":"2662eb87672045bc0806594455099673e165c6f40f3019a3806f18d45b8366fa"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.503490 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.506638 4856 generic.go:334] "Generic (PLEG): container finished" podID="a4614b22-bca8-4306-bd6a-26d99b904420" containerID="7df83b88eff09c832c7f308416cdd5ac41c3b473504a3d4c7ad15b9ba0f0c7d4" exitCode=0 Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.506703 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerDied","Data":"7df83b88eff09c832c7f308416cdd5ac41c3b473504a3d4c7ad15b9ba0f0c7d4"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.506730 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerStarted","Data":"795ae7af513846c16b049a7ed0d153dd6f3466202b1e06e0d2a372907e7ca331"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.510738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b9109180-5009-4b2b-b2ff-b56e90bf72aa","Type":"ContainerStarted","Data":"d671998ddc3fe0d295231db4af37ec393944a802174736f2638401f540602e3d"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.514738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgqs6" event={"ID":"94f7ba13-4be7-4bfc-a5b7-acd860994e94","Type":"ContainerDied","Data":"d599f2ba4c46bff038ae016e91d92cab58cf8a1535f26df9767ac0a10dff5574"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.514823 4856 scope.go:117] "RemoveContainer" containerID="a91269e93928ae670344e835a674cee9dda7bf30b39ffff27cbd8820993acd64" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.514964 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kgqs6" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.529232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" event={"ID":"3884e6c0-b02a-48d4-8796-b118f45a4990","Type":"ContainerDied","Data":"76ba6d9b1329a25751e5c40d84109c7d1a8761c8c5d4e67a6920b0aa7c99bba3"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.529386 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.535322 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-frh7v" event={"ID":"320b56b0-4905-4a31-bc37-13106b993909","Type":"ContainerStarted","Data":"387f5e5816783923d1ef4a7bfd41907c9ac3debac4db8c53b19b3772588c13cf"} Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.549409 4856 scope.go:117] "RemoveContainer" containerID="41e29efcf7a3d937c48b1858de01bfbc09afc1e848befa5ee21a318f69b60c8c" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.561965 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" podStartSLOduration=6.561926581 podStartE2EDuration="6.561926581s" podCreationTimestamp="2025-12-03 09:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:28:56.521503074 +0000 UTC m=+1004.704395385" watchObservedRunningTime="2025-12-03 09:28:56.561926581 +0000 UTC m=+1004.744818872" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.563561 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" podStartSLOduration=5.563550262 podStartE2EDuration="5.563550262s" podCreationTimestamp="2025-12-03 09:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:28:56.551871068 +0000 UTC m=+1004.734763369" watchObservedRunningTime="2025-12-03 09:28:56.563550262 +0000 UTC m=+1004.746442563" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.581311 4856 scope.go:117] "RemoveContainer" containerID="0f01583809383180b43b60f603612ef9cdf61660f668578490541b7c2346d6eb" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.611073 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nnvvc"] Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.629045 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.631766 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nnvvc"] Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.634639 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-frh7v" podStartSLOduration=3.6566236 podStartE2EDuration="10.63460784s" podCreationTimestamp="2025-12-03 09:28:46 +0000 UTC" firstStartedPulling="2025-12-03 09:28:48.69129156 +0000 UTC m=+996.874183861" lastFinishedPulling="2025-12-03 09:28:55.6692758 +0000 UTC m=+1003.852168101" observedRunningTime="2025-12-03 09:28:56.607916548 +0000 UTC m=+1004.790808849" watchObservedRunningTime="2025-12-03 09:28:56.63460784 +0000 UTC m=+1004.817500141" Dec 03 09:28:56 crc kubenswrapper[4856]: 
I1203 09:28:56.658893 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kgqs6"] Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.666226 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kgqs6"] Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.694117 4856 scope.go:117] "RemoveContainer" containerID="7623c1bf7880458259e64305d327d4efcac7aad6029b9a9b3b1a19a502cf5312" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.703113 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" path="/var/lib/kubelet/pods/3884e6c0-b02a-48d4-8796-b118f45a4990/volumes" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.704042 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" path="/var/lib/kubelet/pods/94f7ba13-4be7-4bfc-a5b7-acd860994e94/volumes" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.740183 4856 scope.go:117] "RemoveContainer" containerID="100b52a7cbf25616183859d2b066d2249b684ffd3828b7d7a855aa9c63d33c43" Dec 03 09:28:56 crc kubenswrapper[4856]: I1203 09:28:56.752230 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 03 09:28:59 crc kubenswrapper[4856]: I1203 09:28:59.318241 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 03 09:28:59 crc kubenswrapper[4856]: I1203 09:28:59.417585 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="643c316d-09f1-4aee-8d49-34989baaa50e" containerName="galera" probeResult="failure" output=< Dec 03 09:28:59 crc kubenswrapper[4856]: wsrep_local_state_comment (Joined) differs from Synced Dec 03 09:28:59 crc kubenswrapper[4856]: > Dec 03 09:29:00 crc kubenswrapper[4856]: I1203 09:29:00.694977 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-nnvvc" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: i/o timeout" Dec 03 09:29:00 crc kubenswrapper[4856]: I1203 09:29:00.822097 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.132011 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450103 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kjt7g"] Dec 03 09:29:01 crc kubenswrapper[4856]: E1203 09:29:01.450457 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="extract-utilities" Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450469 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="extract-utilities" Dec 03 09:29:01 crc kubenswrapper[4856]: E1203 09:29:01.450482 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="dnsmasq-dns" Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450488 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="dnsmasq-dns" Dec 03 09:29:01 crc 
kubenswrapper[4856]: E1203 09:29:01.450500 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="extract-content"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450508 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="extract-content"
Dec 03 09:29:01 crc kubenswrapper[4856]: E1203 09:29:01.450522 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="init"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450527 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="init"
Dec 03 09:29:01 crc kubenswrapper[4856]: E1203 09:29:01.450544 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="registry-server"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450550 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="registry-server"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450709 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f7ba13-4be7-4bfc-a5b7-acd860994e94" containerName="registry-server"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.450727 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3884e6c0-b02a-48d4-8796-b118f45a4990" containerName="dnsmasq-dns"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.452059 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.472718 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjt7g"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.597014 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.653799 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-mpgsg"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.654030 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerName="dnsmasq-dns" containerID="cri-o://3937d37b5f889493e9a82b14b119e62e261c22b6c729446c87b19ed105ca230a" gracePeriod=10
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.663995 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-catalog-content\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.664598 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltk8\" (UniqueName: \"kubernetes.io/projected/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-kube-api-access-rltk8\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.664701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-utilities\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.768659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-catalog-content\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.768832 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltk8\" (UniqueName: \"kubernetes.io/projected/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-kube-api-access-rltk8\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.768990 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-utilities\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.772375 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-catalog-content\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.773213 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-utilities\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.832550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltk8\" (UniqueName: \"kubernetes.io/projected/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-kube-api-access-rltk8\") pod \"redhat-operators-kjt7g\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.832847 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9q8bs"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.833963 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.859865 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9q8bs"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.870984 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4lc\" (UniqueName: \"kubernetes.io/projected/2cfde763-a165-46e4-82e3-c49f636a6486-kube-api-access-xd4lc\") pod \"keystone-db-create-9q8bs\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.871309 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cfde763-a165-46e4-82e3-c49f636a6486-operator-scripts\") pod \"keystone-db-create-9q8bs\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.919536 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3de4-account-create-update-2hqfj"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.920734 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.926507 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.940993 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3de4-account-create-update-2hqfj"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.967940 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9jxlg"]
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.969303 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.973407 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd4lc\" (UniqueName: \"kubernetes.io/projected/2cfde763-a165-46e4-82e3-c49f636a6486-kube-api-access-xd4lc\") pod \"keystone-db-create-9q8bs\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.973522 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkcz7\" (UniqueName: \"kubernetes.io/projected/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-kube-api-access-fkcz7\") pod \"keystone-3de4-account-create-update-2hqfj\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.973642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cfde763-a165-46e4-82e3-c49f636a6486-operator-scripts\") pod \"keystone-db-create-9q8bs\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.973749 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-operator-scripts\") pod \"keystone-3de4-account-create-update-2hqfj\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.973858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee726f5c-0148-49df-80ff-5e02896db9b5-operator-scripts\") pod \"placement-db-create-9jxlg\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.973968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cfs\" (UniqueName: \"kubernetes.io/projected/ee726f5c-0148-49df-80ff-5e02896db9b5-kube-api-access-q5cfs\") pod \"placement-db-create-9jxlg\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.975239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cfde763-a165-46e4-82e3-c49f636a6486-operator-scripts\") pod \"keystone-db-create-9q8bs\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:01 crc kubenswrapper[4856]: I1203 09:29:01.975340 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9jxlg"]
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.029775 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd4lc\" (UniqueName: \"kubernetes.io/projected/2cfde763-a165-46e4-82e3-c49f636a6486-kube-api-access-xd4lc\") pod \"keystone-db-create-9q8bs\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.077159 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkcz7\" (UniqueName: \"kubernetes.io/projected/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-kube-api-access-fkcz7\") pod \"keystone-3de4-account-create-update-2hqfj\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.079510 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-operator-scripts\") pod \"keystone-3de4-account-create-update-2hqfj\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.079655 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee726f5c-0148-49df-80ff-5e02896db9b5-operator-scripts\") pod \"placement-db-create-9jxlg\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.079819 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cfs\" (UniqueName: \"kubernetes.io/projected/ee726f5c-0148-49df-80ff-5e02896db9b5-kube-api-access-q5cfs\") pod \"placement-db-create-9jxlg\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.080926 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-operator-scripts\") pod \"keystone-3de4-account-create-update-2hqfj\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.084981 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee726f5c-0148-49df-80ff-5e02896db9b5-operator-scripts\") pod \"placement-db-create-9jxlg\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.093498 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjt7g"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.148733 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e2ae-account-create-update-jjs2q"]
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.150746 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.158706 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.161505 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkcz7\" (UniqueName: \"kubernetes.io/projected/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-kube-api-access-fkcz7\") pod \"keystone-3de4-account-create-update-2hqfj\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.166639 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cfs\" (UniqueName: \"kubernetes.io/projected/ee726f5c-0148-49df-80ff-5e02896db9b5-kube-api-access-q5cfs\") pod \"placement-db-create-9jxlg\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.186482 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e2ae-account-create-update-jjs2q"]
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.187051 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-operator-scripts\") pod \"placement-e2ae-account-create-update-jjs2q\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.188130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbphq\" (UniqueName: \"kubernetes.io/projected/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-kube-api-access-nbphq\") pod \"placement-e2ae-account-create-update-jjs2q\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.257646 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9q8bs"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.324250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-operator-scripts\") pod \"placement-e2ae-account-create-update-jjs2q\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.324485 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbphq\" (UniqueName: \"kubernetes.io/projected/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-kube-api-access-nbphq\") pod \"placement-e2ae-account-create-update-jjs2q\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.327086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-operator-scripts\") pod \"placement-e2ae-account-create-update-jjs2q\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.327584 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3de4-account-create-update-2hqfj"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.353389 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbphq\" (UniqueName: \"kubernetes.io/projected/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-kube-api-access-nbphq\") pod \"placement-e2ae-account-create-update-jjs2q\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.376238 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9jxlg"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.427139 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0"
Dec 03 09:29:02 crc kubenswrapper[4856]: E1203 09:29:02.427369 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 03 09:29:02 crc kubenswrapper[4856]: E1203 09:29:02.427709 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 03 09:29:02 crc kubenswrapper[4856]: E1203 09:29:02.427773 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift podName:9a29fb43-ed6d-499a-a4f7-b847de3dbf71 nodeName:}" failed. No retries permitted until 2025-12-03 09:29:18.427752286 +0000 UTC m=+1026.610644587 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift") pod "swift-storage-0" (UID: "9a29fb43-ed6d-499a-a4f7-b847de3dbf71") : configmap "swift-ring-files" not found
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.495537 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e2ae-account-create-update-jjs2q"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.634195 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerID="3937d37b5f889493e9a82b14b119e62e261c22b6c729446c87b19ed105ca230a" exitCode=0
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.634316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" event={"ID":"f5eee554-aa95-4e25-8bf7-852b84febadf","Type":"ContainerDied","Data":"3937d37b5f889493e9a82b14b119e62e261c22b6c729446c87b19ed105ca230a"}
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.647390 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerStarted","Data":"1a3257f7818488f4dd906d3634fedbd641522566e3dc627ac3e57f457cee20bb"}
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.728124 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjt7g"]
Dec 03 09:29:02 crc kubenswrapper[4856]: W1203 09:29:02.752040 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e0bd46f_8cf2_40b2_8d10_8d2d3d064af2.slice/crio-298610acf8ffa5badbfd2f60089e489282ed618997946a87bcf15dbeb646fae9 WatchSource:0}: Error finding container 298610acf8ffa5badbfd2f60089e489282ed618997946a87bcf15dbeb646fae9: Status 404 returned error can't find the container with id 298610acf8ffa5badbfd2f60089e489282ed618997946a87bcf15dbeb646fae9
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.870036 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ztwpp"]
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.871105 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.938324 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ztwpp"]
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.940004 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7zt\" (UniqueName: \"kubernetes.io/projected/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-kube-api-access-vh7zt\") pod \"glance-db-create-ztwpp\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.940084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-operator-scripts\") pod \"glance-db-create-ztwpp\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:02 crc kubenswrapper[4856]: I1203 09:29:02.977679 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9q8bs"]
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.037000 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-09dd-account-create-update-qzcxv"]
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.042017 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7zt\" (UniqueName: \"kubernetes.io/projected/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-kube-api-access-vh7zt\") pod \"glance-db-create-ztwpp\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.043203 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-operator-scripts\") pod \"glance-db-create-ztwpp\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.044876 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-operator-scripts\") pod \"glance-db-create-ztwpp\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.048501 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.083119 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.089910 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-09dd-account-create-update-qzcxv"]
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.104498 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7zt\" (UniqueName: \"kubernetes.io/projected/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-kube-api-access-vh7zt\") pod \"glance-db-create-ztwpp\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.138156 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3de4-account-create-update-2hqfj"]
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.247628 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhvt\" (UniqueName: \"kubernetes.io/projected/03d88e41-24e3-42e5-915b-235ae9b3515a-kube-api-access-gzhvt\") pod \"glance-09dd-account-create-update-qzcxv\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.247717 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d88e41-24e3-42e5-915b-235ae9b3515a-operator-scripts\") pod \"glance-09dd-account-create-update-qzcxv\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.258672 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ztwpp"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.303397 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9jxlg"]
Dec 03 09:29:03 crc kubenswrapper[4856]: W1203 09:29:03.321855 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee726f5c_0148_49df_80ff_5e02896db9b5.slice/crio-4c1edb7bef130073cc6d912dfde91b2388476937853337d7d39166e9b4537ad8 WatchSource:0}: Error finding container 4c1edb7bef130073cc6d912dfde91b2388476937853337d7d39166e9b4537ad8: Status 404 returned error can't find the container with id 4c1edb7bef130073cc6d912dfde91b2388476937853337d7d39166e9b4537ad8
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.351856 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzhvt\" (UniqueName: \"kubernetes.io/projected/03d88e41-24e3-42e5-915b-235ae9b3515a-kube-api-access-gzhvt\") pod \"glance-09dd-account-create-update-qzcxv\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.351977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d88e41-24e3-42e5-915b-235ae9b3515a-operator-scripts\") pod \"glance-09dd-account-create-update-qzcxv\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.353089 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d88e41-24e3-42e5-915b-235ae9b3515a-operator-scripts\") pod \"glance-09dd-account-create-update-qzcxv\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.402256 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhvt\" (UniqueName: \"kubernetes.io/projected/03d88e41-24e3-42e5-915b-235ae9b3515a-kube-api-access-gzhvt\") pod \"glance-09dd-account-create-update-qzcxv\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.477129 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e2ae-account-create-update-jjs2q"]
Dec 03 09:29:03 crc kubenswrapper[4856]: W1203 09:29:03.482879 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod959b10d9_2ea2_41e5_ab5a_e69fb45ba078.slice/crio-69ce7c2eacefd36d7896389ae036fa6331ec8801e67d478bdb0a6cea3b29faa6 WatchSource:0}: Error finding container 69ce7c2eacefd36d7896389ae036fa6331ec8801e67d478bdb0a6cea3b29faa6: Status 404 returned error can't find the container with id 69ce7c2eacefd36d7896389ae036fa6331ec8801e67d478bdb0a6cea3b29faa6
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.608658 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-09dd-account-create-update-qzcxv"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.676260 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.726657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3de4-account-create-update-2hqfj" event={"ID":"4bdfeb32-52bc-47c4-b845-b4a9602fc64a","Type":"ContainerStarted","Data":"b91eb48b6b2fd77aee0313c7605c88fd1352df4d19240565727ebae9e14e697e"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.737528 4856 generic.go:334] "Generic (PLEG): container finished" podID="a4614b22-bca8-4306-bd6a-26d99b904420" containerID="1a3257f7818488f4dd906d3634fedbd641522566e3dc627ac3e57f457cee20bb" exitCode=0
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.737603 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerDied","Data":"1a3257f7818488f4dd906d3634fedbd641522566e3dc627ac3e57f457cee20bb"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.741266 4856 generic.go:334] "Generic (PLEG): container finished" podID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerID="1a6d14cfaf78aa2cc43a57f4498d978dd013b7f8f88d1860a085e690bdbc342e" exitCode=0
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.741329 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerDied","Data":"1a6d14cfaf78aa2cc43a57f4498d978dd013b7f8f88d1860a085e690bdbc342e"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.741356 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerStarted","Data":"298610acf8ffa5badbfd2f60089e489282ed618997946a87bcf15dbeb646fae9"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.748079 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e2ae-account-create-update-jjs2q" event={"ID":"959b10d9-2ea2-41e5-ab5a-e69fb45ba078","Type":"ContainerStarted","Data":"69ce7c2eacefd36d7896389ae036fa6331ec8801e67d478bdb0a6cea3b29faa6"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.753151 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b9109180-5009-4b2b-b2ff-b56e90bf72aa","Type":"ContainerStarted","Data":"b92e9b3510fdcea36666652ed72507733026fcdc54884c6a1f7988b39bfdfbc3"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.753230 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b9109180-5009-4b2b-b2ff-b56e90bf72aa","Type":"ContainerStarted","Data":"d18656130289c7714c0f65411dd9005e1ecdf6cf271e2ddaeae9c1327b8c5ceb"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.753531 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.756325 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9q8bs" event={"ID":"2cfde763-a165-46e4-82e3-c49f636a6486","Type":"ContainerStarted","Data":"33876cc33dbfef68684e1e30c058df554b020fc18cf34bce65ac749ba1c1ec34"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.756401 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9q8bs" event={"ID":"2cfde763-a165-46e4-82e3-c49f636a6486","Type":"ContainerStarted","Data":"d85db8c46b03bb66e5bc8d39cb99d0923e4b7a251f1867711c44f474c8091b85"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.759312 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9jxlg" event={"ID":"ee726f5c-0148-49df-80ff-5e02896db9b5","Type":"ContainerStarted","Data":"4c1edb7bef130073cc6d912dfde91b2388476937853337d7d39166e9b4537ad8"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.781173 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg" event={"ID":"f5eee554-aa95-4e25-8bf7-852b84febadf","Type":"ContainerDied","Data":"4c70f22246e9bb52a1b5834268776abecc06604543e19dec2a0c1f3b021d405b"}
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.781682 4856 scope.go:117] "RemoveContainer" containerID="3937d37b5f889493e9a82b14b119e62e261c22b6c729446c87b19ed105ca230a"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.781874 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-mpgsg"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.782590 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-ovsdbserver-nb\") pod \"f5eee554-aa95-4e25-8bf7-852b84febadf\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") "
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.782755 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-dns-svc\") pod \"f5eee554-aa95-4e25-8bf7-852b84febadf\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") "
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.782827 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s44f\" (UniqueName: \"kubernetes.io/projected/f5eee554-aa95-4e25-8bf7-852b84febadf-kube-api-access-5s44f\") pod \"f5eee554-aa95-4e25-8bf7-852b84febadf\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") "
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.782860 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config\") pod \"f5eee554-aa95-4e25-8bf7-852b84febadf\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") "
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.830249 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5eee554-aa95-4e25-8bf7-852b84febadf-kube-api-access-5s44f" (OuterVolumeSpecName: "kube-api-access-5s44f") pod "f5eee554-aa95-4e25-8bf7-852b84febadf" (UID: "f5eee554-aa95-4e25-8bf7-852b84febadf"). InnerVolumeSpecName "kube-api-access-5s44f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.837820 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9q8bs" podStartSLOduration=2.837772976 podStartE2EDuration="2.837772976s" podCreationTimestamp="2025-12-03 09:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:03.817468415 +0000 UTC m=+1012.000360716" watchObservedRunningTime="2025-12-03 09:29:03.837772976 +0000 UTC m=+1012.020665277"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.847945 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=6.040077432 podStartE2EDuration="12.847912921s" podCreationTimestamp="2025-12-03 09:28:51 +0000 UTC" firstStartedPulling="2025-12-03 09:28:55.642782833 +0000 UTC m=+1003.825675134" lastFinishedPulling="2025-12-03 09:29:02.450618322 +0000 UTC m=+1010.633510623" observedRunningTime="2025-12-03 09:29:03.843782517 +0000 UTC m=+1012.026674818" watchObservedRunningTime="2025-12-03 09:29:03.847912921 +0000 UTC m=+1012.030805222"
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.885709 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s44f\" (UniqueName: \"kubernetes.io/projected/f5eee554-aa95-4e25-8bf7-852b84febadf-kube-api-access-5s44f\") on node \"crc\" DevicePath \"\""
Dec 03 09:29:03 crc kubenswrapper[4856]: I1203 09:29:03.991011 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5eee554-aa95-4e25-8bf7-852b84febadf" (UID: "f5eee554-aa95-4e25-8bf7-852b84febadf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:29:04 crc kubenswrapper[4856]: E1203 09:29:04.052342 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config podName:f5eee554-aa95-4e25-8bf7-852b84febadf nodeName:}" failed. No retries permitted until 2025-12-03 09:29:04.552300914 +0000 UTC m=+1012.735193215 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config") pod "f5eee554-aa95-4e25-8bf7-852b84febadf" (UID: "f5eee554-aa95-4e25-8bf7-852b84febadf") : error deleting /var/lib/kubelet/pods/f5eee554-aa95-4e25-8bf7-852b84febadf/volume-subpaths: remove /var/lib/kubelet/pods/f5eee554-aa95-4e25-8bf7-852b84febadf/volume-subpaths: no such file or directory
Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.052633 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5eee554-aa95-4e25-8bf7-852b84febadf" (UID: "f5eee554-aa95-4e25-8bf7-852b84febadf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.090699 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.090737 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.101728 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ztwpp"] Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.314190 4856 scope.go:117] "RemoveContainer" containerID="8e1651408309e3855a3b582292e54bdeb89242d30296185312d38b6036926b06" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.389461 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-09dd-account-create-update-qzcxv"] Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.616712 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config\") pod \"f5eee554-aa95-4e25-8bf7-852b84febadf\" (UID: \"f5eee554-aa95-4e25-8bf7-852b84febadf\") " Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.617701 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config" (OuterVolumeSpecName: "config") pod "f5eee554-aa95-4e25-8bf7-852b84febadf" (UID: "f5eee554-aa95-4e25-8bf7-852b84febadf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.719197 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5eee554-aa95-4e25-8bf7-852b84febadf-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.791733 4856 generic.go:334] "Generic (PLEG): container finished" podID="4bdfeb32-52bc-47c4-b845-b4a9602fc64a" containerID="bd7a384a97c177909e0bac225748dc28952cac5c7fa4083759eecc9a1d3c0b83" exitCode=0 Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.791788 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3de4-account-create-update-2hqfj" event={"ID":"4bdfeb32-52bc-47c4-b845-b4a9602fc64a","Type":"ContainerDied","Data":"bd7a384a97c177909e0bac225748dc28952cac5c7fa4083759eecc9a1d3c0b83"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.794403 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerStarted","Data":"4005a0619accda4619e80c17070b353128b66bafa2d90dd711eebcec0cb02a30"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.796277 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-09dd-account-create-update-qzcxv" event={"ID":"03d88e41-24e3-42e5-915b-235ae9b3515a","Type":"ContainerStarted","Data":"a483ed767bd74b1c8dc61ff08310a3907777b2b716d01f6cc5e1fdb801f45a27"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.797284 4856 generic.go:334] "Generic (PLEG): container finished" podID="959b10d9-2ea2-41e5-ab5a-e69fb45ba078" containerID="f59818e3a552a061c82fc8ec0617a63d171c54703b72ad3e7958c61fa7076684" exitCode=0 Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.797327 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e2ae-account-create-update-jjs2q" event={"ID":"959b10d9-2ea2-41e5-ab5a-e69fb45ba078","Type":"ContainerDied","Data":"f59818e3a552a061c82fc8ec0617a63d171c54703b72ad3e7958c61fa7076684"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.801223 4856 generic.go:334] "Generic (PLEG): container finished" podID="6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" containerID="dea1e3a303ba7056d0aa989b5368aeb40fa4a40d4ec5d13ef0226b5821b99c60" exitCode=0 Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.801284 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ztwpp" event={"ID":"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2","Type":"ContainerDied","Data":"dea1e3a303ba7056d0aa989b5368aeb40fa4a40d4ec5d13ef0226b5821b99c60"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.801311 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ztwpp" event={"ID":"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2","Type":"ContainerStarted","Data":"985301d358a932f2c1cc0e5bf860cf86c67f9ac911e61f01fae1202ac31aa70b"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.804485 4856 generic.go:334] "Generic (PLEG): container finished" podID="2cfde763-a165-46e4-82e3-c49f636a6486" containerID="33876cc33dbfef68684e1e30c058df554b020fc18cf34bce65ac749ba1c1ec34" exitCode=0 Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.804525 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9q8bs" event={"ID":"2cfde763-a165-46e4-82e3-c49f636a6486","Type":"ContainerDied","Data":"33876cc33dbfef68684e1e30c058df554b020fc18cf34bce65ac749ba1c1ec34"} 
Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.806041 4856 generic.go:334] "Generic (PLEG): container finished" podID="ee726f5c-0148-49df-80ff-5e02896db9b5" containerID="d06d4fbda9c460bc95d79565168ad295eaf3ac130a340bb3aa654065b5218f92" exitCode=0 Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.806868 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9jxlg" event={"ID":"ee726f5c-0148-49df-80ff-5e02896db9b5","Type":"ContainerDied","Data":"d06d4fbda9c460bc95d79565168ad295eaf3ac130a340bb3aa654065b5218f92"} Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.869275 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78d2s" podStartSLOduration=5.154601607 podStartE2EDuration="11.86925646s" podCreationTimestamp="2025-12-03 09:28:53 +0000 UTC" firstStartedPulling="2025-12-03 09:28:56.509445811 +0000 UTC m=+1004.692338112" lastFinishedPulling="2025-12-03 09:29:03.224100664 +0000 UTC m=+1011.406992965" observedRunningTime="2025-12-03 09:29:04.868776348 +0000 UTC m=+1013.051668649" watchObservedRunningTime="2025-12-03 09:29:04.86925646 +0000 UTC m=+1013.052148761" Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.933501 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-mpgsg"] Dec 03 09:29:04 crc kubenswrapper[4856]: I1203 09:29:04.941763 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-mpgsg"] Dec 03 09:29:05 crc kubenswrapper[4856]: I1203 09:29:05.820089 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerStarted","Data":"98e1f0432cc88f926f60d4b8daecf0536f71858c04b027b4693698d5337d1a6a"} Dec 03 09:29:05 crc kubenswrapper[4856]: I1203 09:29:05.853076 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-09dd-account-create-update-qzcxv" event={"ID":"03d88e41-24e3-42e5-915b-235ae9b3515a","Type":"ContainerStarted","Data":"6aa7c61cd434bd3cbf075b0a3ad31700da174e60ed3f533130ddab4b75599385"} Dec 03 09:29:05 crc kubenswrapper[4856]: I1203 09:29:05.887363 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-09dd-account-create-update-qzcxv" podStartSLOduration=3.887328096 podStartE2EDuration="3.887328096s" podCreationTimestamp="2025-12-03 09:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:05.880534325 +0000 UTC m=+1014.063426646" watchObservedRunningTime="2025-12-03 09:29:05.887328096 +0000 UTC m=+1014.070220397" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.700939 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" path="/var/lib/kubelet/pods/f5eee554-aa95-4e25-8bf7-852b84febadf/volumes" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.867045 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e2ae-account-create-update-jjs2q" event={"ID":"959b10d9-2ea2-41e5-ab5a-e69fb45ba078","Type":"ContainerDied","Data":"69ce7c2eacefd36d7896389ae036fa6331ec8801e67d478bdb0a6cea3b29faa6"} Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.867102 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69ce7c2eacefd36d7896389ae036fa6331ec8801e67d478bdb0a6cea3b29faa6" Dec 03 
09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.869823 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ztwpp" event={"ID":"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2","Type":"ContainerDied","Data":"985301d358a932f2c1cc0e5bf860cf86c67f9ac911e61f01fae1202ac31aa70b"} Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.869886 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985301d358a932f2c1cc0e5bf860cf86c67f9ac911e61f01fae1202ac31aa70b" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.872411 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9q8bs" event={"ID":"2cfde763-a165-46e4-82e3-c49f636a6486","Type":"ContainerDied","Data":"d85db8c46b03bb66e5bc8d39cb99d0923e4b7a251f1867711c44f474c8091b85"} Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.872517 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85db8c46b03bb66e5bc8d39cb99d0923e4b7a251f1867711c44f474c8091b85" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.874694 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9jxlg" event={"ID":"ee726f5c-0148-49df-80ff-5e02896db9b5","Type":"ContainerDied","Data":"4c1edb7bef130073cc6d912dfde91b2388476937853337d7d39166e9b4537ad8"} Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.874751 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1edb7bef130073cc6d912dfde91b2388476937853337d7d39166e9b4537ad8" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.893609 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3de4-account-create-update-2hqfj" event={"ID":"4bdfeb32-52bc-47c4-b845-b4a9602fc64a","Type":"ContainerDied","Data":"b91eb48b6b2fd77aee0313c7605c88fd1352df4d19240565727ebae9e14e697e"} Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.893697 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b91eb48b6b2fd77aee0313c7605c88fd1352df4d19240565727ebae9e14e697e" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.948585 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9q8bs" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.963619 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3de4-account-create-update-2hqfj" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.978206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-operator-scripts\") pod \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.978454 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd4lc\" (UniqueName: \"kubernetes.io/projected/2cfde763-a165-46e4-82e3-c49f636a6486-kube-api-access-xd4lc\") pod \"2cfde763-a165-46e4-82e3-c49f636a6486\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.978500 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cfde763-a165-46e4-82e3-c49f636a6486-operator-scripts\") pod \"2cfde763-a165-46e4-82e3-c49f636a6486\" (UID: \"2cfde763-a165-46e4-82e3-c49f636a6486\") " Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.978535 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkcz7\" (UniqueName: \"kubernetes.io/projected/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-kube-api-access-fkcz7\") pod \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\" (UID: \"4bdfeb32-52bc-47c4-b845-b4a9602fc64a\") " Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.979061 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bdfeb32-52bc-47c4-b845-b4a9602fc64a" (UID: "4bdfeb32-52bc-47c4-b845-b4a9602fc64a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.979111 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cfde763-a165-46e4-82e3-c49f636a6486-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cfde763-a165-46e4-82e3-c49f636a6486" (UID: "2cfde763-a165-46e4-82e3-c49f636a6486"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.987145 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-kube-api-access-fkcz7" (OuterVolumeSpecName: "kube-api-access-fkcz7") pod "4bdfeb32-52bc-47c4-b845-b4a9602fc64a" (UID: "4bdfeb32-52bc-47c4-b845-b4a9602fc64a"). InnerVolumeSpecName "kube-api-access-fkcz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.990319 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9jxlg" Dec 03 09:29:06 crc kubenswrapper[4856]: I1203 09:29:06.991860 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfde763-a165-46e4-82e3-c49f636a6486-kube-api-access-xd4lc" (OuterVolumeSpecName: "kube-api-access-xd4lc") pod "2cfde763-a165-46e4-82e3-c49f636a6486" (UID: "2cfde763-a165-46e4-82e3-c49f636a6486"). InnerVolumeSpecName "kube-api-access-xd4lc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.068094 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e2ae-account-create-update-jjs2q" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.079780 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5cfs\" (UniqueName: \"kubernetes.io/projected/ee726f5c-0148-49df-80ff-5e02896db9b5-kube-api-access-q5cfs\") pod \"ee726f5c-0148-49df-80ff-5e02896db9b5\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080005 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbphq\" (UniqueName: \"kubernetes.io/projected/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-kube-api-access-nbphq\") pod \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080055 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee726f5c-0148-49df-80ff-5e02896db9b5-operator-scripts\") pod \"ee726f5c-0148-49df-80ff-5e02896db9b5\" (UID: \"ee726f5c-0148-49df-80ff-5e02896db9b5\") " Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080312 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-operator-scripts\") pod \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\" (UID: \"959b10d9-2ea2-41e5-ab5a-e69fb45ba078\") " Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080767 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cfde763-a165-46e4-82e3-c49f636a6486-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080788 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkcz7\" (UniqueName: \"kubernetes.io/projected/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-kube-api-access-fkcz7\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080818 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bdfeb32-52bc-47c4-b845-b4a9602fc64a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.080829 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd4lc\" (UniqueName: \"kubernetes.io/projected/2cfde763-a165-46e4-82e3-c49f636a6486-kube-api-access-xd4lc\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.081015 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "959b10d9-2ea2-41e5-ab5a-e69fb45ba078" (UID: "959b10d9-2ea2-41e5-ab5a-e69fb45ba078"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.081654 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ztwpp" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.081990 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee726f5c-0148-49df-80ff-5e02896db9b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee726f5c-0148-49df-80ff-5e02896db9b5" (UID: "ee726f5c-0148-49df-80ff-5e02896db9b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.086693 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee726f5c-0148-49df-80ff-5e02896db9b5-kube-api-access-q5cfs" (OuterVolumeSpecName: "kube-api-access-q5cfs") pod "ee726f5c-0148-49df-80ff-5e02896db9b5" (UID: "ee726f5c-0148-49df-80ff-5e02896db9b5"). InnerVolumeSpecName "kube-api-access-q5cfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.086777 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-kube-api-access-nbphq" (OuterVolumeSpecName: "kube-api-access-nbphq") pod "959b10d9-2ea2-41e5-ab5a-e69fb45ba078" (UID: "959b10d9-2ea2-41e5-ab5a-e69fb45ba078"). InnerVolumeSpecName "kube-api-access-nbphq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.182275 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh7zt\" (UniqueName: \"kubernetes.io/projected/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-kube-api-access-vh7zt\") pod \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.182388 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-operator-scripts\") pod \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\" (UID: \"6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2\") " Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.182818 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" (UID: "6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.183020 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.183046 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5cfs\" (UniqueName: \"kubernetes.io/projected/ee726f5c-0148-49df-80ff-5e02896db9b5-kube-api-access-q5cfs\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.183061 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbphq\" (UniqueName: \"kubernetes.io/projected/959b10d9-2ea2-41e5-ab5a-e69fb45ba078-kube-api-access-nbphq\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.183074 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee726f5c-0148-49df-80ff-5e02896db9b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.183083 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.185234 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-kube-api-access-vh7zt" (OuterVolumeSpecName: "kube-api-access-vh7zt") pod "6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" (UID: "6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2"). InnerVolumeSpecName "kube-api-access-vh7zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.285095 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh7zt\" (UniqueName: \"kubernetes.io/projected/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2-kube-api-access-vh7zt\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.906128 4856 generic.go:334] "Generic (PLEG): container finished" podID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerID="98e1f0432cc88f926f60d4b8daecf0536f71858c04b027b4693698d5337d1a6a" exitCode=0 Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.906702 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3de4-account-create-update-2hqfj" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.906179 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerDied","Data":"98e1f0432cc88f926f60d4b8daecf0536f71858c04b027b4693698d5337d1a6a"} Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.907427 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9q8bs" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.907548 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e2ae-account-create-update-jjs2q" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.907592 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ztwpp" Dec 03 09:29:07 crc kubenswrapper[4856]: I1203 09:29:07.907752 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9jxlg" Dec 03 09:29:08 crc kubenswrapper[4856]: I1203 09:29:08.922600 4856 generic.go:334] "Generic (PLEG): container finished" podID="03d88e41-24e3-42e5-915b-235ae9b3515a" containerID="6aa7c61cd434bd3cbf075b0a3ad31700da174e60ed3f533130ddab4b75599385" exitCode=0 Dec 03 09:29:08 crc kubenswrapper[4856]: I1203 09:29:08.922708 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-09dd-account-create-update-qzcxv" event={"ID":"03d88e41-24e3-42e5-915b-235ae9b3515a","Type":"ContainerDied","Data":"6aa7c61cd434bd3cbf075b0a3ad31700da174e60ed3f533130ddab4b75599385"} Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.505894 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crnh4"] Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506421 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerName="dnsmasq-dns" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506447 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerName="dnsmasq-dns" Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506478 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdfeb32-52bc-47c4-b845-b4a9602fc64a" containerName="mariadb-account-create-update" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506490 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdfeb32-52bc-47c4-b845-b4a9602fc64a" containerName="mariadb-account-create-update" Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506508 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfde763-a165-46e4-82e3-c49f636a6486" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506516 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfde763-a165-46e4-82e3-c49f636a6486" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506535 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959b10d9-2ea2-41e5-ab5a-e69fb45ba078" containerName="mariadb-account-create-update" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506543 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="959b10d9-2ea2-41e5-ab5a-e69fb45ba078" containerName="mariadb-account-create-update" Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506562 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506571 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506588 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee726f5c-0148-49df-80ff-5e02896db9b5" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506595 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee726f5c-0148-49df-80ff-5e02896db9b5" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: E1203 09:29:09.506614 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerName="init" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506622 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerName="init" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506832 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfde763-a165-46e4-82e3-c49f636a6486" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506857 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="959b10d9-2ea2-41e5-ab5a-e69fb45ba078" containerName="mariadb-account-create-update" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506892 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee726f5c-0148-49df-80ff-5e02896db9b5" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506905 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bdfeb32-52bc-47c4-b845-b4a9602fc64a" containerName="mariadb-account-create-update" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506913 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" containerName="mariadb-database-create" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.506931 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5eee554-aa95-4e25-8bf7-852b84febadf" containerName="dnsmasq-dns" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.508766 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.527591 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crnh4"] Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.627706 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-utilities\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.627879 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-catalog-content\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.627979 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hhdz\" (UniqueName: \"kubernetes.io/projected/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-kube-api-access-7hhdz\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.730378 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-utilities\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " 
pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.730508 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-catalog-content\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.730590 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hhdz\" (UniqueName: \"kubernetes.io/projected/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-kube-api-access-7hhdz\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.731112 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-utilities\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.731508 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-catalog-content\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.757969 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hhdz\" (UniqueName: \"kubernetes.io/projected/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-kube-api-access-7hhdz\") pod \"certified-operators-crnh4\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.830402 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.937051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerStarted","Data":"db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49"} Dec 03 09:29:09 crc kubenswrapper[4856]: I1203 09:29:09.969349 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjt7g" podStartSLOduration=3.984724934 podStartE2EDuration="8.969319947s" podCreationTimestamp="2025-12-03 09:29:01 +0000 UTC" firstStartedPulling="2025-12-03 09:29:03.749697589 +0000 UTC m=+1011.932589890" lastFinishedPulling="2025-12-03 09:29:08.734292602 +0000 UTC m=+1016.917184903" observedRunningTime="2025-12-03 09:29:09.963259345 +0000 UTC m=+1018.146151646" watchObservedRunningTime="2025-12-03 09:29:09.969319947 +0000 UTC m=+1018.152212248" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.474999 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-09dd-account-create-update-qzcxv" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.561644 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d88e41-24e3-42e5-915b-235ae9b3515a-operator-scripts\") pod \"03d88e41-24e3-42e5-915b-235ae9b3515a\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.561820 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzhvt\" (UniqueName: \"kubernetes.io/projected/03d88e41-24e3-42e5-915b-235ae9b3515a-kube-api-access-gzhvt\") pod \"03d88e41-24e3-42e5-915b-235ae9b3515a\" (UID: \"03d88e41-24e3-42e5-915b-235ae9b3515a\") " Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.563172 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d88e41-24e3-42e5-915b-235ae9b3515a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03d88e41-24e3-42e5-915b-235ae9b3515a" (UID: "03d88e41-24e3-42e5-915b-235ae9b3515a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.563360 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d88e41-24e3-42e5-915b-235ae9b3515a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.580901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crnh4"] Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.593163 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d88e41-24e3-42e5-915b-235ae9b3515a-kube-api-access-gzhvt" (OuterVolumeSpecName: "kube-api-access-gzhvt") pod "03d88e41-24e3-42e5-915b-235ae9b3515a" (UID: "03d88e41-24e3-42e5-915b-235ae9b3515a"). InnerVolumeSpecName "kube-api-access-gzhvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.665381 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzhvt\" (UniqueName: \"kubernetes.io/projected/03d88e41-24e3-42e5-915b-235ae9b3515a-kube-api-access-gzhvt\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.949272 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-09dd-account-create-update-qzcxv" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.949294 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-09dd-account-create-update-qzcxv" event={"ID":"03d88e41-24e3-42e5-915b-235ae9b3515a","Type":"ContainerDied","Data":"a483ed767bd74b1c8dc61ff08310a3907777b2b716d01f6cc5e1fdb801f45a27"} Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.949358 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a483ed767bd74b1c8dc61ff08310a3907777b2b716d01f6cc5e1fdb801f45a27" Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.952148 4856 generic.go:334] "Generic (PLEG): container finished" podID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerID="462944fed5eac808e33a2e120d4f1d47b3230f419c21c98ad37b706c913ecf6d" exitCode=0 Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.952513 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crnh4" event={"ID":"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a","Type":"ContainerDied","Data":"462944fed5eac808e33a2e120d4f1d47b3230f419c21c98ad37b706c913ecf6d"} Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.952556 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crnh4" event={"ID":"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a","Type":"ContainerStarted","Data":"466a2582ba3320fb71a6d7b614e1084df85a0e3663bffca4b76efc809715ebf7"} Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.958139 4856 generic.go:334] "Generic (PLEG): container finished" podID="320b56b0-4905-4a31-bc37-13106b993909" containerID="387f5e5816783923d1ef4a7bfd41907c9ac3debac4db8c53b19b3772588c13cf" exitCode=0 Dec 03 09:29:10 crc kubenswrapper[4856]: I1203 09:29:10.958208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-frh7v" event={"ID":"320b56b0-4905-4a31-bc37-13106b993909","Type":"ContainerDied","Data":"387f5e5816783923d1ef4a7bfd41907c9ac3debac4db8c53b19b3772588c13cf"} Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.095273 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjt7g" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.095893 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjt7g" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.360000 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.423709 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522492 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-combined-ca-bundle\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522718 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-dispersionconf\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522827 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/320b56b0-4905-4a31-bc37-13106b993909-etc-swift\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522864 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-ring-data-devices\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522888 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-scripts\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-swiftconf\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.522966 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hdpj\" (UniqueName: \"kubernetes.io/projected/320b56b0-4905-4a31-bc37-13106b993909-kube-api-access-2hdpj\") pod \"320b56b0-4905-4a31-bc37-13106b993909\" (UID: \"320b56b0-4905-4a31-bc37-13106b993909\") " Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.524166 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.524594 4856 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.525567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320b56b0-4905-4a31-bc37-13106b993909-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.535985 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.536283 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320b56b0-4905-4a31-bc37-13106b993909-kube-api-access-2hdpj" (OuterVolumeSpecName: "kube-api-access-2hdpj") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "kube-api-access-2hdpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.557130 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.567768 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-scripts" (OuterVolumeSpecName: "scripts") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.573600 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "320b56b0-4905-4a31-bc37-13106b993909" (UID: "320b56b0-4905-4a31-bc37-13106b993909"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.627667 4856 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.627740 4856 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/320b56b0-4905-4a31-bc37-13106b993909-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.627750 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/320b56b0-4905-4a31-bc37-13106b993909-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.627759 4856 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.627770 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hdpj\" (UniqueName: \"kubernetes.io/projected/320b56b0-4905-4a31-bc37-13106b993909-kube-api-access-2hdpj\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.627781 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/320b56b0-4905-4a31-bc37-13106b993909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.980436 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-frh7v" event={"ID":"320b56b0-4905-4a31-bc37-13106b993909","Type":"ContainerDied","Data":"3016783037aa8ee4a986db1bd663f71f7552848a4ad47530646aabcb687a6207"} Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.980788 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3016783037aa8ee4a986db1bd663f71f7552848a4ad47530646aabcb687a6207" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.980482 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-frh7v" Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.984166 4856 generic.go:334] "Generic (PLEG): container finished" podID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerID="d5541cbfc308cfb952d707ecd63fb29c334c3cd3e454abe242e563dba2c0de2c" exitCode=0 Dec 03 09:29:12 crc kubenswrapper[4856]: I1203 09:29:12.984206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crnh4" event={"ID":"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a","Type":"ContainerDied","Data":"d5541cbfc308cfb952d707ecd63fb29c334c3cd3e454abe242e563dba2c0de2c"} Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.124800 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7tm2h" podUID="da1b289d-32ea-4bbb-a203-d208e0267f9b" containerName="ovn-controller" probeResult="failure" output=< Dec 03 09:29:13 crc kubenswrapper[4856]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 09:29:13 crc kubenswrapper[4856]: > Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.151007 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.155161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-g4lq4" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.166950 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kjt7g" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="registry-server" probeResult="failure" output=< Dec 03 09:29:13 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Dec 03 09:29:13 crc kubenswrapper[4856]: > Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.256288 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zpbhz"] Dec 03 09:29:13 crc kubenswrapper[4856]: E1203 09:29:13.256881 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d88e41-24e3-42e5-915b-235ae9b3515a" containerName="mariadb-account-create-update" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.256907 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d88e41-24e3-42e5-915b-235ae9b3515a" containerName="mariadb-account-create-update" Dec 03 09:29:13 crc kubenswrapper[4856]: E1203 09:29:13.256950 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320b56b0-4905-4a31-bc37-13106b993909" containerName="swift-ring-rebalance" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.256960 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="320b56b0-4905-4a31-bc37-13106b993909" containerName="swift-ring-rebalance" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.257202 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="320b56b0-4905-4a31-bc37-13106b993909" containerName="swift-ring-rebalance" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.257246 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d88e41-24e3-42e5-915b-235ae9b3515a" containerName="mariadb-account-create-update" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.258150 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.264882 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.264992 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-glqkm" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.273437 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zpbhz"] Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.484373 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-combined-ca-bundle\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.484445 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-db-sync-config-data\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.484508 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vdvw\" (UniqueName: \"kubernetes.io/projected/3a337aa7-570e-40fe-86ca-faee49a09165-kube-api-access-2vdvw\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.484556 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-config-data\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.587842 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-combined-ca-bundle\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.587928 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-db-sync-config-data\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.588005 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vdvw\" (UniqueName: \"kubernetes.io/projected/3a337aa7-570e-40fe-86ca-faee49a09165-kube-api-access-2vdvw\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.588066 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-config-data\") pod 
\"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.600957 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-db-sync-config-data\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.600966 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-config-data\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.602850 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-combined-ca-bundle\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.612454 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7tm2h-config-5cxhn"] Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.615450 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.625140 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.630167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vdvw\" (UniqueName: \"kubernetes.io/projected/3a337aa7-570e-40fe-86ca-faee49a09165-kube-api-access-2vdvw\") pod \"glance-db-sync-zpbhz\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.636541 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7tm2h-config-5cxhn"] Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.672967 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.673063 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.694309 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-additional-scripts\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.694562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 
09:29:13.694678 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz26z\" (UniqueName: \"kubernetes.io/projected/ce431888-5f76-451c-a2b5-d8033c088ebd-kube-api-access-cz26z\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.694842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-log-ovn\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.694983 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run-ovn\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.695095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-scripts\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.742255 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.797302 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-log-ovn\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.797382 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run-ovn\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.797444 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-scripts\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.797573 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-additional-scripts\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.797598 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.797637 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz26z\" (UniqueName: \"kubernetes.io/projected/ce431888-5f76-451c-a2b5-d8033c088ebd-kube-api-access-cz26z\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.798591 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.798664 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run-ovn\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.798715 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-log-ovn\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.798947 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-additional-scripts\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.800977 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-scripts\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.819623 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zpbhz" Dec 03 09:29:13 crc kubenswrapper[4856]: I1203 09:29:13.824651 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz26z\" (UniqueName: \"kubernetes.io/projected/ce431888-5f76-451c-a2b5-d8033c088ebd-kube-api-access-cz26z\") pod \"ovn-controller-7tm2h-config-5cxhn\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:14 crc kubenswrapper[4856]: I1203 09:29:14.005677 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:14 crc kubenswrapper[4856]: I1203 09:29:14.093929 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:29:14 crc kubenswrapper[4856]: I1203 09:29:14.410304 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zpbhz"] Dec 03 09:29:14 crc kubenswrapper[4856]: W1203 09:29:14.552064 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce431888_5f76_451c_a2b5_d8033c088ebd.slice/crio-d5335acce5a06e9581802d3dc15dbafb1c2fe42737d863dbc402d8860ef12255 WatchSource:0}: Error finding container d5335acce5a06e9581802d3dc15dbafb1c2fe42737d863dbc402d8860ef12255: Status 404 returned error can't find the container with id d5335acce5a06e9581802d3dc15dbafb1c2fe42737d863dbc402d8860ef12255 Dec 03 09:29:14 crc kubenswrapper[4856]: I1203 09:29:14.555834 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7tm2h-config-5cxhn"] Dec 03 09:29:15 crc kubenswrapper[4856]: I1203 09:29:15.024171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-5cxhn" event={"ID":"ce431888-5f76-451c-a2b5-d8033c088ebd","Type":"ContainerStarted","Data":"8013f351d8f14936b7f5fd5d3439c74ac5f9485b4ad9bb22f17b0dddc30bedfb"} Dec 03 09:29:15 crc kubenswrapper[4856]: I1203 09:29:15.024251 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-5cxhn" event={"ID":"ce431888-5f76-451c-a2b5-d8033c088ebd","Type":"ContainerStarted","Data":"d5335acce5a06e9581802d3dc15dbafb1c2fe42737d863dbc402d8860ef12255"} Dec 03 09:29:15 crc kubenswrapper[4856]: I1203 09:29:15.025858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zpbhz" event={"ID":"3a337aa7-570e-40fe-86ca-faee49a09165","Type":"ContainerStarted","Data":"052de2a5903b1f6a56d298523ecef1f2d8df1d5322ed78ad764d2f4baa23e561"} Dec 03 09:29:15 crc kubenswrapper[4856]: I1203 09:29:15.028275 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crnh4" event={"ID":"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a","Type":"ContainerStarted","Data":"6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802"} Dec 03 09:29:15 crc kubenswrapper[4856]: I1203 09:29:15.053405 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7tm2h-config-5cxhn" podStartSLOduration=2.053368262 podStartE2EDuration="2.053368262s" podCreationTimestamp="2025-12-03 09:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:15.046684504 +0000 UTC m=+1023.229576815" watchObservedRunningTime="2025-12-03 09:29:15.053368262 +0000 UTC m=+1023.236260563" Dec 03 09:29:16 crc kubenswrapper[4856]: I1203 09:29:16.044508 4856 generic.go:334] "Generic (PLEG): container finished" podID="16e71e20-1329-46fc-b544-39febc69ae60" containerID="e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132" exitCode=0 Dec 03 09:29:16 crc kubenswrapper[4856]: I1203 09:29:16.044733 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16e71e20-1329-46fc-b544-39febc69ae60","Type":"ContainerDied","Data":"e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132"} Dec 03 09:29:16 crc 
kubenswrapper[4856]: I1203 09:29:16.050241 4856 generic.go:334] "Generic (PLEG): container finished" podID="ce431888-5f76-451c-a2b5-d8033c088ebd" containerID="8013f351d8f14936b7f5fd5d3439c74ac5f9485b4ad9bb22f17b0dddc30bedfb" exitCode=0 Dec 03 09:29:16 crc kubenswrapper[4856]: I1203 09:29:16.050619 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-5cxhn" event={"ID":"ce431888-5f76-451c-a2b5-d8033c088ebd","Type":"ContainerDied","Data":"8013f351d8f14936b7f5fd5d3439c74ac5f9485b4ad9bb22f17b0dddc30bedfb"} Dec 03 09:29:16 crc kubenswrapper[4856]: I1203 09:29:16.080073 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crnh4" podStartSLOduration=3.928632569 podStartE2EDuration="7.080039915s" podCreationTimestamp="2025-12-03 09:29:09 +0000 UTC" firstStartedPulling="2025-12-03 09:29:10.954061145 +0000 UTC m=+1019.136953446" lastFinishedPulling="2025-12-03 09:29:14.105468491 +0000 UTC m=+1022.288360792" observedRunningTime="2025-12-03 09:29:15.083249494 +0000 UTC m=+1023.266141785" watchObservedRunningTime="2025-12-03 09:29:16.080039915 +0000 UTC m=+1024.262932216" Dec 03 09:29:16 crc kubenswrapper[4856]: I1203 09:29:16.110674 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78d2s"] Dec 03 09:29:16 crc kubenswrapper[4856]: I1203 09:29:16.111029 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-78d2s" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="registry-server" containerID="cri-o://4005a0619accda4619e80c17070b353128b66bafa2d90dd711eebcec0cb02a30" gracePeriod=2 Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.065359 4856 generic.go:334] "Generic (PLEG): container finished" podID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerID="0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3" exitCode=0 Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.065482 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb","Type":"ContainerDied","Data":"0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3"} Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.069122 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16e71e20-1329-46fc-b544-39febc69ae60","Type":"ContainerStarted","Data":"83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118"} Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.070899 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.078326 4856 generic.go:334] "Generic (PLEG): container finished" podID="a4614b22-bca8-4306-bd6a-26d99b904420" containerID="4005a0619accda4619e80c17070b353128b66bafa2d90dd711eebcec0cb02a30" exitCode=0 Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.078638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerDied","Data":"4005a0619accda4619e80c17070b353128b66bafa2d90dd711eebcec0cb02a30"} Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.295518 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.301490751 
podStartE2EDuration="1m20.295483138s" podCreationTimestamp="2025-12-03 09:27:57 +0000 UTC" firstStartedPulling="2025-12-03 09:27:59.65410431 +0000 UTC m=+947.836996611" lastFinishedPulling="2025-12-03 09:28:40.648096697 +0000 UTC m=+988.830988998" observedRunningTime="2025-12-03 09:29:17.275201838 +0000 UTC m=+1025.458094139" watchObservedRunningTime="2025-12-03 09:29:17.295483138 +0000 UTC m=+1025.478375449" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.383494 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.493951 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-utilities\") pod \"a4614b22-bca8-4306-bd6a-26d99b904420\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.494002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwpxl\" (UniqueName: \"kubernetes.io/projected/a4614b22-bca8-4306-bd6a-26d99b904420-kube-api-access-kwpxl\") pod \"a4614b22-bca8-4306-bd6a-26d99b904420\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.494037 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-catalog-content\") pod \"a4614b22-bca8-4306-bd6a-26d99b904420\" (UID: \"a4614b22-bca8-4306-bd6a-26d99b904420\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.495633 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-utilities" (OuterVolumeSpecName: "utilities") pod "a4614b22-bca8-4306-bd6a-26d99b904420" (UID: "a4614b22-bca8-4306-bd6a-26d99b904420"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.506667 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4614b22-bca8-4306-bd6a-26d99b904420-kube-api-access-kwpxl" (OuterVolumeSpecName: "kube-api-access-kwpxl") pod "a4614b22-bca8-4306-bd6a-26d99b904420" (UID: "a4614b22-bca8-4306-bd6a-26d99b904420"). InnerVolumeSpecName "kube-api-access-kwpxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.522318 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4614b22-bca8-4306-bd6a-26d99b904420" (UID: "a4614b22-bca8-4306-bd6a-26d99b904420"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.596792 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.596849 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4614b22-bca8-4306-bd6a-26d99b904420-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.596859 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwpxl\" (UniqueName: \"kubernetes.io/projected/a4614b22-bca8-4306-bd6a-26d99b904420-kube-api-access-kwpxl\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.628965 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800448 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run\") pod \"ce431888-5f76-451c-a2b5-d8033c088ebd\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800722 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run" (OuterVolumeSpecName: "var-run") pod "ce431888-5f76-451c-a2b5-d8033c088ebd" (UID: "ce431888-5f76-451c-a2b5-d8033c088ebd"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800793 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz26z\" (UniqueName: \"kubernetes.io/projected/ce431888-5f76-451c-a2b5-d8033c088ebd-kube-api-access-cz26z\") pod \"ce431888-5f76-451c-a2b5-d8033c088ebd\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800856 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-additional-scripts\") pod \"ce431888-5f76-451c-a2b5-d8033c088ebd\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-scripts\") pod \"ce431888-5f76-451c-a2b5-d8033c088ebd\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800950 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-log-ovn\") pod \"ce431888-5f76-451c-a2b5-d8033c088ebd\" (UID: \"ce431888-5f76-451c-a2b5-d8033c088ebd\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.800973 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run-ovn\") pod \"ce431888-5f76-451c-a2b5-d8033c088ebd\" (UID: 
\"ce431888-5f76-451c-a2b5-d8033c088ebd\") " Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.801306 4856 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.801342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ce431888-5f76-451c-a2b5-d8033c088ebd" (UID: "ce431888-5f76-451c-a2b5-d8033c088ebd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.802101 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ce431888-5f76-451c-a2b5-d8033c088ebd" (UID: "ce431888-5f76-451c-a2b5-d8033c088ebd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.802650 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ce431888-5f76-451c-a2b5-d8033c088ebd" (UID: "ce431888-5f76-451c-a2b5-d8033c088ebd"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.802876 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-scripts" (OuterVolumeSpecName: "scripts") pod "ce431888-5f76-451c-a2b5-d8033c088ebd" (UID: "ce431888-5f76-451c-a2b5-d8033c088ebd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.807407 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce431888-5f76-451c-a2b5-d8033c088ebd-kube-api-access-cz26z" (OuterVolumeSpecName: "kube-api-access-cz26z") pod "ce431888-5f76-451c-a2b5-d8033c088ebd" (UID: "ce431888-5f76-451c-a2b5-d8033c088ebd"). InnerVolumeSpecName "kube-api-access-cz26z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.903385 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz26z\" (UniqueName: \"kubernetes.io/projected/ce431888-5f76-451c-a2b5-d8033c088ebd-kube-api-access-cz26z\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.903455 4856 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.903470 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce431888-5f76-451c-a2b5-d8033c088ebd-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.903484 4856 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:17 crc kubenswrapper[4856]: I1203 09:29:17.903497 4856 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce431888-5f76-451c-a2b5-d8033c088ebd-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.123774 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb","Type":"ContainerStarted","Data":"b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2"} Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.124493 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.128409 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78d2s" event={"ID":"a4614b22-bca8-4306-bd6a-26d99b904420","Type":"ContainerDied","Data":"795ae7af513846c16b049a7ed0d153dd6f3466202b1e06e0d2a372907e7ca331"} Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.128466 4856 scope.go:117] "RemoveContainer" containerID="4005a0619accda4619e80c17070b353128b66bafa2d90dd711eebcec0cb02a30" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.128622 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78d2s" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.156171 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7tm2h" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.162956 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-5cxhn" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.162991 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-5cxhn" event={"ID":"ce431888-5f76-451c-a2b5-d8033c088ebd","Type":"ContainerDied","Data":"d5335acce5a06e9581802d3dc15dbafb1c2fe42737d863dbc402d8860ef12255"} Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.163042 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5335acce5a06e9581802d3dc15dbafb1c2fe42737d863dbc402d8860ef12255" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.193086 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371956.661728 podStartE2EDuration="1m20.193048763s" podCreationTimestamp="2025-12-03 09:27:58 +0000 UTC" firstStartedPulling="2025-12-03 09:28:00.591371086 +0000 UTC m=+948.774263387" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:18.185253557 +0000 UTC m=+1026.368145858" watchObservedRunningTime="2025-12-03 09:29:18.193048763 +0000 UTC m=+1026.375941064" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.210889 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7tm2h-config-5cxhn"] Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.219976 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7tm2h-config-5cxhn"] Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.226497 4856 scope.go:117] "RemoveContainer" containerID="1a3257f7818488f4dd906d3634fedbd641522566e3dc627ac3e57f457cee20bb" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.246385 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78d2s"] Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.268441 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-78d2s"] Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.270285 4856 scope.go:117] "RemoveContainer" containerID="7df83b88eff09c832c7f308416cdd5ac41c3b473504a3d4c7ad15b9ba0f0c7d4" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.508214 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.538987 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7tm2h-config-sgs2c"] Dec 03 09:29:18 crc kubenswrapper[4856]: E1203 09:29:18.540093 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce431888-5f76-451c-a2b5-d8033c088ebd" containerName="ovn-config" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.540115 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce431888-5f76-451c-a2b5-d8033c088ebd" containerName="ovn-config" Dec 03 09:29:18 crc kubenswrapper[4856]: E1203 09:29:18.540147 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="extract-content" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.540156 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="extract-content" 
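
A note on the pod_startup_latency_tracker record above: podStartSLOduration=-9223371956.661728 is an artifact, not a measurement. lastFinishedPulling was evidently never recorded (it is the Go zero time, "0001-01-01 00:00:00 +0000 UTC"), so the pull-window subtraction saturates at the minimum time.Duration and the next subtraction wraps around int64. A standalone Go sketch that reproduces the logged numbers (kubelet's actual expression lives in its pod startup latency tracker; only the arithmetic is reconstructed here):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Values copied from the log record above.
        created := parse("2025-12-03 09:27:58 +0000 UTC")            // podCreationTimestamp
        observed := parse("2025-12-03 09:29:18.193048763 +0000 UTC") // watchObservedRunningTime
        firstStartedPulling := parse("2025-12-03 09:28:00.591371086 +0000 UTC")
        var lastFinishedPulling time.Time // never recorded: the zero time, year 1

        e2e := observed.Sub(created)
        // The zero time lies some 2025 years before firstStartedPulling, which
        // does not fit in an int64 of nanoseconds, so Sub saturates at the
        // minimum Duration.
        pulled := lastFinishedPulling.Sub(firstStartedPulling)
        // Duration arithmetic is plain int64 arithmetic: subtracting the
        // saturated minimum overflows and wraps to a huge negative value.
        slo := e2e - pulled

        fmt.Println(e2e)           // 1m20.193048763s, the logged podStartE2EDuration
        fmt.Println(pulled)        // -2562047h47m16.854775808s (minimum time.Duration)
        fmt.Println(slo.Seconds()) // -9.223371956661728e+09, the logged podStartSLOduration
    }

The takeaway: a grossly negative podStartSLOduration flags a pod for which the tracker never saw an image pull complete, not a pod that started before it was created.
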
Dec 03 09:29:18 crc kubenswrapper[4856]: E1203 09:29:18.540181 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="registry-server" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.540188 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="registry-server" Dec 03 09:29:18 crc kubenswrapper[4856]: E1203 09:29:18.540208 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="extract-utilities" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.540217 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="extract-utilities" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.540648 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce431888-5f76-451c-a2b5-d8033c088ebd" containerName="ovn-config" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.540723 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" containerName="registry-server" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.543921 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9a29fb43-ed6d-499a-a4f7-b847de3dbf71-etc-swift\") pod \"swift-storage-0\" (UID: \"9a29fb43-ed6d-499a-a4f7-b847de3dbf71\") " pod="openstack/swift-storage-0" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.546075 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.553089 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.568263 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7tm2h-config-sgs2c"] Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.611628 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.611678 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run-ovn\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.611726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-scripts\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.611788 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbd74\" (UniqueName: 
\"kubernetes.io/projected/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-kube-api-access-wbd74\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.612558 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-additional-scripts\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.612635 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-log-ovn\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.701343 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4614b22-bca8-4306-bd6a-26d99b904420" path="/var/lib/kubelet/pods/a4614b22-bca8-4306-bd6a-26d99b904420/volumes" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.702593 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce431888-5f76-451c-a2b5-d8033c088ebd" path="/var/lib/kubelet/pods/ce431888-5f76-451c-a2b5-d8033c088ebd/volumes" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.718298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbd74\" (UniqueName: \"kubernetes.io/projected/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-kube-api-access-wbd74\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.718368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-additional-scripts\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.718464 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-log-ovn\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.718516 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.718538 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run-ovn\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " 
pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.718654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-scripts\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.719420 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-additional-scripts\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.719717 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-log-ovn\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.719720 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.719829 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run-ovn\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.727990 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-scripts\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.740015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbd74\" (UniqueName: \"kubernetes.io/projected/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-kube-api-access-wbd74\") pod \"ovn-controller-7tm2h-config-sgs2c\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.828744 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 03 09:29:18 crc kubenswrapper[4856]: I1203 09:29:18.925323 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:19 crc kubenswrapper[4856]: I1203 09:29:19.417690 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 09:29:19 crc kubenswrapper[4856]: I1203 09:29:19.513110 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7tm2h-config-sgs2c"] Dec 03 09:29:19 crc kubenswrapper[4856]: W1203 09:29:19.524316 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1be7358_0d4f_4a77_84e5_fff68bcdef1f.slice/crio-b913034ffd2104af3d99dec68ee01d17b3dda0cb316074c9ad9014989c0086d9 WatchSource:0}: Error finding container b913034ffd2104af3d99dec68ee01d17b3dda0cb316074c9ad9014989c0086d9: Status 404 returned error can't find the container with id b913034ffd2104af3d99dec68ee01d17b3dda0cb316074c9ad9014989c0086d9 Dec 03 09:29:19 crc kubenswrapper[4856]: I1203 09:29:19.831083 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:19 crc kubenswrapper[4856]: I1203 09:29:19.832192 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:19 crc kubenswrapper[4856]: I1203 09:29:19.896024 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:20 crc kubenswrapper[4856]: I1203 09:29:20.206524 4856 generic.go:334] "Generic (PLEG): container finished" podID="e1be7358-0d4f-4a77-84e5-fff68bcdef1f" containerID="cf4e9e205f99b7f2208d58d50dd9e2ffde1364c2706d48057c071ab6ba888a0c" exitCode=0 Dec 03 09:29:20 crc kubenswrapper[4856]: I1203 09:29:20.206629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-sgs2c" event={"ID":"e1be7358-0d4f-4a77-84e5-fff68bcdef1f","Type":"ContainerDied","Data":"cf4e9e205f99b7f2208d58d50dd9e2ffde1364c2706d48057c071ab6ba888a0c"} Dec 03 09:29:20 crc kubenswrapper[4856]: I1203 09:29:20.206709 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-sgs2c" event={"ID":"e1be7358-0d4f-4a77-84e5-fff68bcdef1f","Type":"ContainerStarted","Data":"b913034ffd2104af3d99dec68ee01d17b3dda0cb316074c9ad9014989c0086d9"} Dec 03 09:29:20 crc kubenswrapper[4856]: I1203 09:29:20.210532 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"7d735e960d60bf630b8368edf73e0e38c4e1b58df713af32807e001d19fb5c5e"} Dec 03 09:29:20 crc kubenswrapper[4856]: I1203 09:29:20.277056 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.234870 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"e2f94030c0ef5160b2745ece92c0b88e9ca2ab8ae71a88b4f1757d6cdf047269"} Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.235404 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"52bc48a470432ff5baa9245dc46bb49e15571659e361fbebc73c42354e7168ac"} Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 
09:29:21.692395 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-sgs2c" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.695924 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crnh4"] Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891055 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-additional-scripts\") pod \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891196 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-log-ovn\") pod \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891295 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run-ovn\") pod \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891420 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-scripts\") pod \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891492 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run\") pod \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891520 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e1be7358-0d4f-4a77-84e5-fff68bcdef1f" (UID: "e1be7358-0d4f-4a77-84e5-fff68bcdef1f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891557 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbd74\" (UniqueName: \"kubernetes.io/projected/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-kube-api-access-wbd74\") pod \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\" (UID: \"e1be7358-0d4f-4a77-84e5-fff68bcdef1f\") " Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891660 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e1be7358-0d4f-4a77-84e5-fff68bcdef1f" (UID: "e1be7358-0d4f-4a77-84e5-fff68bcdef1f"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.891820 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run" (OuterVolumeSpecName: "var-run") pod "e1be7358-0d4f-4a77-84e5-fff68bcdef1f" (UID: "e1be7358-0d4f-4a77-84e5-fff68bcdef1f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.892670 4856 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.892706 4856 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.892722 4856 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.892647 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e1be7358-0d4f-4a77-84e5-fff68bcdef1f" (UID: "e1be7358-0d4f-4a77-84e5-fff68bcdef1f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.893083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-scripts" (OuterVolumeSpecName: "scripts") pod "e1be7358-0d4f-4a77-84e5-fff68bcdef1f" (UID: "e1be7358-0d4f-4a77-84e5-fff68bcdef1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:21 crc kubenswrapper[4856]: I1203 09:29:21.901623 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-kube-api-access-wbd74" (OuterVolumeSpecName: "kube-api-access-wbd74") pod "e1be7358-0d4f-4a77-84e5-fff68bcdef1f" (UID: "e1be7358-0d4f-4a77-84e5-fff68bcdef1f"). InnerVolumeSpecName "kube-api-access-wbd74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:21.994158 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbd74\" (UniqueName: \"kubernetes.io/projected/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-kube-api-access-wbd74\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:21.994202 4856 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:21.994215 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1be7358-0d4f-4a77-84e5-fff68bcdef1f-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.155458 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjt7g" Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.212040 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kjt7g" Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.262400 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"71fd4f90d04620831c6c3fde774efa7a8172cb789e82a63689fbed2f2011bd8c"} Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.262451 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"2d13bbcb79978fcc741dd614040912b188435d3852db6fd33b965a9ea60cdce7"} Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.265215 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7tm2h-config-sgs2c"
Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.265267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7tm2h-config-sgs2c" event={"ID":"e1be7358-0d4f-4a77-84e5-fff68bcdef1f","Type":"ContainerDied","Data":"b913034ffd2104af3d99dec68ee01d17b3dda0cb316074c9ad9014989c0086d9"}
Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.265559 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b913034ffd2104af3d99dec68ee01d17b3dda0cb316074c9ad9014989c0086d9"
Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.759439 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.759556 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.798997 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7tm2h-config-sgs2c"]
Dec 03 09:29:22 crc kubenswrapper[4856]: I1203 09:29:22.802990 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7tm2h-config-sgs2c"]
Dec 03 09:29:23 crc kubenswrapper[4856]: I1203 09:29:23.276534 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crnh4" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="registry-server" containerID="cri-o://6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802" gracePeriod=2
Dec 03 09:29:24 crc kubenswrapper[4856]: I1203 09:29:24.302638 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjt7g"]
Dec 03 09:29:24 crc kubenswrapper[4856]: I1203 09:29:24.303229 4856 generic.go:334] "Generic (PLEG): container finished" podID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerID="6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802" exitCode=0
Dec 03 09:29:24 crc kubenswrapper[4856]: I1203 09:29:24.303294 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crnh4" event={"ID":"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a","Type":"ContainerDied","Data":"6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802"}
Dec 03 09:29:24 crc kubenswrapper[4856]: I1203 09:29:24.303430 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kjt7g" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="registry-server" containerID="cri-o://db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49" gracePeriod=2
Dec 03 09:29:24 crc kubenswrapper[4856]: I1203 09:29:24.701914 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1be7358-0d4f-4a77-84e5-fff68bcdef1f" path="/var/lib/kubelet/pods/e1be7358-0d4f-4a77-84e5-fff68bcdef1f/volumes"
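
Both registry-server containers above are stopped with gracePeriod=2: the runtime delivers SIGTERM, waits up to two seconds, then escalates to SIGKILL, and the exitCode=0 / ContainerDied events show both exited inside the window. A minimal sketch of honoring that contract from inside the container (the flush helper is hypothetical, standing in for whatever cleanup a real registry server does):

    package main

    import (
        "context"
        "log"
        "os/signal"
        "syscall"
        "time"
    )

    // flush is hypothetical: a stand-in for the real server's shutdown work.
    func flush(ctx context.Context) error {
        select {
        case <-time.After(500 * time.Millisecond): // pretend cleanup work
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        // ctx is canceled when the runtime delivers SIGTERM.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        <-ctx.Done() // stop requested

        // With gracePeriod=2 there are roughly two seconds to finish cleanly
        // before the runtime escalates to SIGKILL.
        shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        if err := flush(shutdownCtx); err != nil {
            log.Printf("shutdown incomplete: %v", err)
        }
    }

A process that ignores SIGTERM simply absorbs the whole grace period and is killed hard, which typically surfaces as a non-zero exit code rather than the clean exits seen here.
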
podID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerID="db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49" exitCode=0 Dec 03 09:29:25 crc kubenswrapper[4856]: I1203 09:29:25.317340 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerDied","Data":"db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49"} Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.207943 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.634581 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cdnzk"] Dec 03 09:29:29 crc kubenswrapper[4856]: E1203 09:29:29.635638 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1be7358-0d4f-4a77-84e5-fff68bcdef1f" containerName="ovn-config" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.635661 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1be7358-0d4f-4a77-84e5-fff68bcdef1f" containerName="ovn-config" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.635888 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1be7358-0d4f-4a77-84e5-fff68bcdef1f" containerName="ovn-config" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.636605 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.648795 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cdnzk"] Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.665429 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e469-account-create-update-fxbxd"] Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.667128 4856 util.go:30] "No sandbox for pod can be found. 
Dec 03 09:29:29 crc kubenswrapper[4856]: W1203 09:29:29.670625 4856 reflector.go:561] object-"openstack"/"cinder-db-secret": failed to list *v1.Secret: secrets "cinder-db-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Dec 03 09:29:29 crc kubenswrapper[4856]: E1203 09:29:29.670686 4856 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-db-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-db-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.675093 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.696640 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e469-account-create-update-fxbxd"]
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.761997 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17424d3-bc7c-4a17-890b-5ddb43c4004b-operator-scripts\") pod \"cinder-e469-account-create-update-fxbxd\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " pod="openstack/cinder-e469-account-create-update-fxbxd"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.762315 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhsk\" (UniqueName: \"kubernetes.io/projected/d17424d3-bc7c-4a17-890b-5ddb43c4004b-kube-api-access-nzhsk\") pod \"cinder-e469-account-create-update-fxbxd\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " pod="openstack/cinder-e469-account-create-update-fxbxd"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.762418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqgd\" (UniqueName: \"kubernetes.io/projected/56e15650-511c-4fa4-b083-95f4c90e5833-kube-api-access-7xqgd\") pod \"cinder-db-create-cdnzk\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " pod="openstack/cinder-db-create-cdnzk"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.762503 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e15650-511c-4fa4-b083-95f4c90e5833-operator-scripts\") pod \"cinder-db-create-cdnzk\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " pod="openstack/cinder-db-create-cdnzk"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.772565 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-cae5-account-create-update-mjb74"]
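
The reflector warning above is the node authorizer at work: a kubelet may only read a secret once a pod referencing it is bound to that node, and cinder-db-create-cdnzk had been ADDed only milliseconds earlier, so the first list attempt lost that race ("no relationship found between node 'crc' and this object"). The retry succeeds shortly after; see the "Caches populated ... cinder-db-secret" record further down. For illustration, the single-object watch has roughly this client-go shape (a sketch, not kubelet's actual wiring):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // One ListWatch per referenced secret, restricted by name, so the
        // node authorizer can grant or deny access object by object.
        lw := cache.NewListWatchFromClient(
            client.CoreV1().RESTClient(), "secrets", "openstack",
            fields.OneTermEqualSelector("metadata.name", "cinder-db-secret"))

        _, controller := cache.NewInformer(lw, &corev1.Secret{}, 0,
            cache.ResourceEventHandlerFuncs{
                AddFunc: func(obj interface{}) {
                    // Fires once the list/watch is finally authorized,
                    // i.e. the log's "Caches populated" moment.
                    fmt.Println("cache populated:", obj.(*corev1.Secret).Name)
                },
            })
        controller.Run(make(chan struct{}))
    }
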
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.774285 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cae5-account-create-update-mjb74"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.785839 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.786073 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cpm64"]
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.787695 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cpm64"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.805078 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cae5-account-create-update-mjb74"]
Dec 03 09:29:29 crc kubenswrapper[4856]: E1203 09:29:29.837356 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802 is running failed: container process not found" containerID="6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 09:29:29 crc kubenswrapper[4856]: E1203 09:29:29.838359 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802 is running failed: container process not found" containerID="6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 09:29:29 crc kubenswrapper[4856]: E1203 09:29:29.838843 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802 is running failed: container process not found" containerID="6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802" cmd=["grpc_health_probe","-addr=:50051"]
Dec 03 09:29:29 crc kubenswrapper[4856]: E1203 09:29:29.838902 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-crnh4" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="registry-server"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.864349 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cpm64"]
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.872765 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e15650-511c-4fa4-b083-95f4c90e5833-operator-scripts\") pod \"cinder-db-create-cdnzk\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " pod="openstack/cinder-db-create-cdnzk"
Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.872968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvjx\" (UniqueName: \"kubernetes.io/projected/85130774-d029-4aa8-b4ab-a06843b68f0a-kube-api-access-grvjx\") pod \"barbican-cae5-account-create-update-mjb74\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " pod="openstack/barbican-cae5-account-create-update-mjb74"
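
The three ExecSync failures above are the readiness probe of certified-operators-crnh4 racing its own teardown: the registry-server process was killed at 09:29:23 (gracePeriod=2 earlier), so by the time the prober exec'd grpc_health_probe the container process was gone and the runtime answered NotFound. Noisy, but expected during termination. The probe command is just a client for the standard gRPC health service; an equivalent check in Go (the :50051 address comes from the logged cmd, dialed here as localhost for illustration):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // The same RPC grpc_health_probe issues: Check on grpc.health.v1.Health.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(resp.GetStatus()) // SERVING on a healthy registry
    }

On a live registry-server this prints SERVING; against a terminating container the dial or RPC fails, which is exactly what the prober reports.
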
pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.873027 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17424d3-bc7c-4a17-890b-5ddb43c4004b-operator-scripts\") pod \"cinder-e469-account-create-update-fxbxd\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.873137 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/648722b8-a2df-4dac-94df-6dd3aa1db7be-operator-scripts\") pod \"barbican-db-create-cpm64\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.873218 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85130774-d029-4aa8-b4ab-a06843b68f0a-operator-scripts\") pod \"barbican-cae5-account-create-update-mjb74\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.873287 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhsk\" (UniqueName: \"kubernetes.io/projected/d17424d3-bc7c-4a17-890b-5ddb43c4004b-kube-api-access-nzhsk\") pod \"cinder-e469-account-create-update-fxbxd\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.873344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbbr\" (UniqueName: \"kubernetes.io/projected/648722b8-a2df-4dac-94df-6dd3aa1db7be-kube-api-access-nlbbr\") pod \"barbican-db-create-cpm64\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.873402 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xqgd\" (UniqueName: \"kubernetes.io/projected/56e15650-511c-4fa4-b083-95f4c90e5833-kube-api-access-7xqgd\") pod \"cinder-db-create-cdnzk\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.885452 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17424d3-bc7c-4a17-890b-5ddb43c4004b-operator-scripts\") pod \"cinder-e469-account-create-update-fxbxd\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.886779 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e15650-511c-4fa4-b083-95f4c90e5833-operator-scripts\") pod \"cinder-db-create-cdnzk\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.920896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhsk\" (UniqueName: 
\"kubernetes.io/projected/d17424d3-bc7c-4a17-890b-5ddb43c4004b-kube-api-access-nzhsk\") pod \"cinder-e469-account-create-update-fxbxd\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.921589 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xqgd\" (UniqueName: \"kubernetes.io/projected/56e15650-511c-4fa4-b083-95f4c90e5833-kube-api-access-7xqgd\") pod \"cinder-db-create-cdnzk\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:29 crc kubenswrapper[4856]: I1203 09:29:29.964055 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.038078 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85130774-d029-4aa8-b4ab-a06843b68f0a-operator-scripts\") pod \"barbican-cae5-account-create-update-mjb74\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.038212 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbbr\" (UniqueName: \"kubernetes.io/projected/648722b8-a2df-4dac-94df-6dd3aa1db7be-kube-api-access-nlbbr\") pod \"barbican-db-create-cpm64\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.038428 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvjx\" (UniqueName: \"kubernetes.io/projected/85130774-d029-4aa8-b4ab-a06843b68f0a-kube-api-access-grvjx\") pod \"barbican-cae5-account-create-update-mjb74\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.038580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/648722b8-a2df-4dac-94df-6dd3aa1db7be-operator-scripts\") pod \"barbican-db-create-cpm64\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.039423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85130774-d029-4aa8-b4ab-a06843b68f0a-operator-scripts\") pod \"barbican-cae5-account-create-update-mjb74\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.039674 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/648722b8-a2df-4dac-94df-6dd3aa1db7be-operator-scripts\") pod \"barbican-db-create-cpm64\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.041450 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.062302 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbbr\" (UniqueName: \"kubernetes.io/projected/648722b8-a2df-4dac-94df-6dd3aa1db7be-kube-api-access-nlbbr\") pod \"barbican-db-create-cpm64\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.063700 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvjx\" (UniqueName: \"kubernetes.io/projected/85130774-d029-4aa8-b4ab-a06843b68f0a-kube-api-access-grvjx\") pod \"barbican-cae5-account-create-update-mjb74\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.108548 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-d4pvx"] Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.109916 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.111597 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.117051 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.117306 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.117491 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.117659 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k95wr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.126604 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.128724 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-d4pvx"] Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.141294 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfc6r\" (UniqueName: \"kubernetes.io/projected/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-kube-api-access-jfc6r\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.141464 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-combined-ca-bundle\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.141507 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-config-data\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.195134 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-chdvr"] Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.196307 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.208993 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-chdvr"] Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.218684 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f497-account-create-update-qlnhj"] Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.220653 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.223591 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.243465 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfc6r\" (UniqueName: \"kubernetes.io/projected/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-kube-api-access-jfc6r\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.243600 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-combined-ca-bundle\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.243629 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-config-data\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.251463 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-combined-ca-bundle\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.253586 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f497-account-create-update-qlnhj"] Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.270693 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-config-data\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.274795 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfc6r\" (UniqueName: \"kubernetes.io/projected/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-kube-api-access-jfc6r\") pod \"keystone-db-sync-d4pvx\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.345481 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b57444-8f69-47dd-baf7-ecdae356f3bd-operator-scripts\") pod \"neutron-f497-account-create-update-qlnhj\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.346047 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhf8\" (UniqueName: \"kubernetes.io/projected/34b57444-8f69-47dd-baf7-ecdae356f3bd-kube-api-access-swhf8\") pod \"neutron-f497-account-create-update-qlnhj\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " 
pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.346133 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9150aa96-ff02-409b-ac82-c6e3e88d54cc-operator-scripts\") pod \"neutron-db-create-chdvr\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.346579 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqc9d\" (UniqueName: \"kubernetes.io/projected/9150aa96-ff02-409b-ac82-c6e3e88d54cc-kube-api-access-bqc9d\") pod \"neutron-db-create-chdvr\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.448664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhf8\" (UniqueName: \"kubernetes.io/projected/34b57444-8f69-47dd-baf7-ecdae356f3bd-kube-api-access-swhf8\") pod \"neutron-f497-account-create-update-qlnhj\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.448784 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9150aa96-ff02-409b-ac82-c6e3e88d54cc-operator-scripts\") pod \"neutron-db-create-chdvr\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.448827 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqc9d\" (UniqueName: \"kubernetes.io/projected/9150aa96-ff02-409b-ac82-c6e3e88d54cc-kube-api-access-bqc9d\") pod \"neutron-db-create-chdvr\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.448875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b57444-8f69-47dd-baf7-ecdae356f3bd-operator-scripts\") pod \"neutron-f497-account-create-update-qlnhj\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.449757 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b57444-8f69-47dd-baf7-ecdae356f3bd-operator-scripts\") pod \"neutron-f497-account-create-update-qlnhj\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.449938 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9150aa96-ff02-409b-ac82-c6e3e88d54cc-operator-scripts\") pod \"neutron-db-create-chdvr\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.457499 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.467396 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqc9d\" (UniqueName: \"kubernetes.io/projected/9150aa96-ff02-409b-ac82-c6e3e88d54cc-kube-api-access-bqc9d\") pod \"neutron-db-create-chdvr\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.467671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhf8\" (UniqueName: \"kubernetes.io/projected/34b57444-8f69-47dd-baf7-ecdae356f3bd-kube-api-access-swhf8\") pod \"neutron-f497-account-create-update-qlnhj\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.519875 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.546165 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:30 crc kubenswrapper[4856]: I1203 09:29:30.567347 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.094986 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49 is running failed: container process not found" containerID="db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.095796 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49 is running failed: container process not found" containerID="db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.096231 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49 is running failed: container process not found" containerID="db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.096271 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-kjt7g" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="registry-server" Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.399848 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.400662 4856 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vdvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-zpbhz_openstack(3a337aa7-570e-40fe-86ca-faee49a09165): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:29:32 crc kubenswrapper[4856]: E1203 09:29:32.401988 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-zpbhz" podUID="3a337aa7-570e-40fe-86ca-faee49a09165" Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.780944 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjt7g" Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.812304 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-catalog-content\") pod \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.812459 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-utilities\") pod \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.812551 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rltk8\" (UniqueName: \"kubernetes.io/projected/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-kube-api-access-rltk8\") pod \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\" (UID: \"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2\") " Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.817021 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-utilities" (OuterVolumeSpecName: "utilities") pod "5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" (UID: "5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.829136 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-kube-api-access-rltk8" (OuterVolumeSpecName: "kube-api-access-rltk8") pod "5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" (UID: "5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2"). InnerVolumeSpecName "kube-api-access-rltk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.915354 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rltk8\" (UniqueName: \"kubernetes.io/projected/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-kube-api-access-rltk8\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.915396 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:32 crc kubenswrapper[4856]: I1203 09:29:32.975139 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" (UID: "5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.017888 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.246535 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cdnzk"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.264499 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cpm64"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.277736 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cae5-account-create-update-mjb74"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.286983 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e469-account-create-update-fxbxd"] Dec 03 09:29:33 crc kubenswrapper[4856]: W1203 09:29:33.332859 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd17424d3_bc7c_4a17_890b_5ddb43c4004b.slice/crio-dac3863eda84f2edc4933d441eab64778a3d09563b93c7859d4485ef9f5c1726 WatchSource:0}: Error finding container dac3863eda84f2edc4933d441eab64778a3d09563b93c7859d4485ef9f5c1726: Status 404 returned error can't find the container with id dac3863eda84f2edc4933d441eab64778a3d09563b93c7859d4485ef9f5c1726 Dec 03 09:29:33 crc kubenswrapper[4856]: W1203 09:29:33.336220 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56e15650_511c_4fa4_b083_95f4c90e5833.slice/crio-0d7d65e86408d22889ca3d58357ef79a8b470aa01cde337a3c1de619edf70eac WatchSource:0}: Error finding container 0d7d65e86408d22889ca3d58357ef79a8b470aa01cde337a3c1de619edf70eac: Status 404 returned error can't find the container with id 0d7d65e86408d22889ca3d58357ef79a8b470aa01cde337a3c1de619edf70eac Dec 03 09:29:33 crc kubenswrapper[4856]: W1203 09:29:33.339396 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85130774_d029_4aa8_b4ab_a06843b68f0a.slice/crio-05a56dab5f3c21cc7dc528d45a8b3738157d501199780f6e6984a26d306c63c3 WatchSource:0}: Error finding container 05a56dab5f3c21cc7dc528d45a8b3738157d501199780f6e6984a26d306c63c3: Status 404 returned error can't find the container with id 05a56dab5f3c21cc7dc528d45a8b3738157d501199780f6e6984a26d306c63c3 Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.396517 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt7g" event={"ID":"5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2","Type":"ContainerDied","Data":"298610acf8ffa5badbfd2f60089e489282ed618997946a87bcf15dbeb646fae9"} Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.396578 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjt7g" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.396622 4856 scope.go:117] "RemoveContainer" containerID="db12f3c9a2e1cc25e063546982f45dc511e464647ceb6e801ff150e935d36f49" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.460348 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crnh4" event={"ID":"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a","Type":"ContainerDied","Data":"466a2582ba3320fb71a6d7b614e1084df85a0e3663bffca4b76efc809715ebf7"} Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.460446 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466a2582ba3320fb71a6d7b614e1084df85a0e3663bffca4b76efc809715ebf7" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.464591 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cae5-account-create-update-mjb74" event={"ID":"85130774-d029-4aa8-b4ab-a06843b68f0a","Type":"ContainerStarted","Data":"05a56dab5f3c21cc7dc528d45a8b3738157d501199780f6e6984a26d306c63c3"} Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.467049 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cdnzk" event={"ID":"56e15650-511c-4fa4-b083-95f4c90e5833","Type":"ContainerStarted","Data":"0d7d65e86408d22889ca3d58357ef79a8b470aa01cde337a3c1de619edf70eac"} Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.469518 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e469-account-create-update-fxbxd" event={"ID":"d17424d3-bc7c-4a17-890b-5ddb43c4004b","Type":"ContainerStarted","Data":"dac3863eda84f2edc4933d441eab64778a3d09563b93c7859d4485ef9f5c1726"} Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.473754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cpm64" event={"ID":"648722b8-a2df-4dac-94df-6dd3aa1db7be","Type":"ContainerStarted","Data":"d6eae483f0015af74673d7901540262495ede01afd7b2c510360455c03f06eca"} Dec 03 09:29:33 crc kubenswrapper[4856]: E1203 09:29:33.474537 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-zpbhz" podUID="3a337aa7-570e-40fe-86ca-faee49a09165" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.516135 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-d4pvx"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.553668 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f497-account-create-update-qlnhj"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.560736 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-chdvr"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.565830 4856 scope.go:117] "RemoveContainer" containerID="98e1f0432cc88f926f60d4b8daecf0536f71858c04b027b4693698d5337d1a6a" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.586477 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:33 crc kubenswrapper[4856]: W1203 09:29:33.587599 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a7667fa_862b_4ac3_a7b1_92e336d1c63d.slice/crio-81360452825a334ed119830dea55fe08f38be747faa3cefc52eb72551502821e WatchSource:0}: Error finding container 81360452825a334ed119830dea55fe08f38be747faa3cefc52eb72551502821e: Status 404 returned error can't find the container with id 81360452825a334ed119830dea55fe08f38be747faa3cefc52eb72551502821e Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.651702 4856 scope.go:117] "RemoveContainer" containerID="1a6d14cfaf78aa2cc43a57f4498d978dd013b7f8f88d1860a085e690bdbc342e" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.653569 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-catalog-content\") pod \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.653632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-utilities\") pod \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.653949 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hhdz\" (UniqueName: \"kubernetes.io/projected/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-kube-api-access-7hhdz\") pod \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\" (UID: \"fecd7103-2bf5-46c2-a17e-5d5c0e31e13a\") " Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.659244 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-utilities" (OuterVolumeSpecName: "utilities") pod "fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" (UID: "fecd7103-2bf5-46c2-a17e-5d5c0e31e13a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.678868 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjt7g"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.693587 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kjt7g"] Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.694058 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-kube-api-access-7hhdz" (OuterVolumeSpecName: "kube-api-access-7hhdz") pod "fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" (UID: "fecd7103-2bf5-46c2-a17e-5d5c0e31e13a"). InnerVolumeSpecName "kube-api-access-7hhdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.731156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" (UID: "fecd7103-2bf5-46c2-a17e-5d5c0e31e13a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.757885 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hhdz\" (UniqueName: \"kubernetes.io/projected/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-kube-api-access-7hhdz\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.757936 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:33 crc kubenswrapper[4856]: I1203 09:29:33.757948 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.497966 4856 generic.go:334] "Generic (PLEG): container finished" podID="56e15650-511c-4fa4-b083-95f4c90e5833" containerID="abb8635ad093dfabd3555fb1ad751412d77bd3c30b503ce538335dcc43a91942" exitCode=0 Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.498153 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cdnzk" event={"ID":"56e15650-511c-4fa4-b083-95f4c90e5833","Type":"ContainerDied","Data":"abb8635ad093dfabd3555fb1ad751412d77bd3c30b503ce538335dcc43a91942"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.502397 4856 generic.go:334] "Generic (PLEG): container finished" podID="d17424d3-bc7c-4a17-890b-5ddb43c4004b" containerID="fffc8818c4098d7e15af236c5466452bf7e309e3b4c79dbeed3433264201beea" exitCode=0 Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.502704 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e469-account-create-update-fxbxd" event={"ID":"d17424d3-bc7c-4a17-890b-5ddb43c4004b","Type":"ContainerDied","Data":"fffc8818c4098d7e15af236c5466452bf7e309e3b4c79dbeed3433264201beea"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.511876 4856 generic.go:334] "Generic (PLEG): container finished" podID="648722b8-a2df-4dac-94df-6dd3aa1db7be" containerID="4529a1191e68ea3b2598e70b001acfdfe4d15b2e330d4a446ffb625c49e6b2b0" exitCode=0 Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.511995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cpm64" event={"ID":"648722b8-a2df-4dac-94df-6dd3aa1db7be","Type":"ContainerDied","Data":"4529a1191e68ea3b2598e70b001acfdfe4d15b2e330d4a446ffb625c49e6b2b0"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.514552 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d4pvx" event={"ID":"9a7667fa-862b-4ac3-a7b1-92e336d1c63d","Type":"ContainerStarted","Data":"81360452825a334ed119830dea55fe08f38be747faa3cefc52eb72551502821e"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.517317 4856 generic.go:334] "Generic (PLEG): container finished" podID="9150aa96-ff02-409b-ac82-c6e3e88d54cc" containerID="a0e22dfd2895630267419d5fd9e42c8158232062f5f297c0fe93ef13cfd88a6b" exitCode=0 Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.517499 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chdvr" event={"ID":"9150aa96-ff02-409b-ac82-c6e3e88d54cc","Type":"ContainerDied","Data":"a0e22dfd2895630267419d5fd9e42c8158232062f5f297c0fe93ef13cfd88a6b"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.517557 4856 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-create-chdvr" event={"ID":"9150aa96-ff02-409b-ac82-c6e3e88d54cc","Type":"ContainerStarted","Data":"c523d358a2ff870d897cda97061a70db698f4e22011897e990bef28dda92123b"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.521686 4856 generic.go:334] "Generic (PLEG): container finished" podID="34b57444-8f69-47dd-baf7-ecdae356f3bd" containerID="edd9961a3c83cd58c564e9958e939a141febd21774efaba92d67f6de4463d585" exitCode=0 Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.521799 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f497-account-create-update-qlnhj" event={"ID":"34b57444-8f69-47dd-baf7-ecdae356f3bd","Type":"ContainerDied","Data":"edd9961a3c83cd58c564e9958e939a141febd21774efaba92d67f6de4463d585"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.521860 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f497-account-create-update-qlnhj" event={"ID":"34b57444-8f69-47dd-baf7-ecdae356f3bd","Type":"ContainerStarted","Data":"35da18727389ce43939f3be9f13ea7da097f5cbd600ea7e8e9f85e3af3a10050"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.552184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"9c9ce297412bf7205ececab89c123ca0cfb459a7ba85f815d2f54aa7c9655e79"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.552255 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"acd6df531932e5678e187947992bbc2c2087ff130b4a2043947216596aea3f38"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.555961 4856 generic.go:334] "Generic (PLEG): container finished" podID="85130774-d029-4aa8-b4ab-a06843b68f0a" containerID="496089bb145ff7a11c5cd7656787b6daa1131556a940186646fefe860d7691ae" exitCode=0 Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.556016 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cae5-account-create-update-mjb74" event={"ID":"85130774-d029-4aa8-b4ab-a06843b68f0a","Type":"ContainerDied","Data":"496089bb145ff7a11c5cd7656787b6daa1131556a940186646fefe860d7691ae"} Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.556073 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crnh4" Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.667010 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crnh4"] Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.674451 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crnh4"] Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.707728 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" path="/var/lib/kubelet/pods/5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2/volumes" Dec 03 09:29:34 crc kubenswrapper[4856]: I1203 09:29:34.708932 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" path="/var/lib/kubelet/pods/fecd7103-2bf5-46c2-a17e-5d5c0e31e13a/volumes" Dec 03 09:29:35 crc kubenswrapper[4856]: I1203 09:29:35.572143 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"a1c59065261beb903fd4f83f7b9e3d253cd62f00b5b9fd28aa6c0db5f41e9c01"} Dec 03 09:29:35 crc kubenswrapper[4856]: I1203 09:29:35.572219 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"70ba543a32a8f9d5f5320400d7da32d1939305bef8a010358a524474648e159b"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.239979 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.281002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85130774-d029-4aa8-b4ab-a06843b68f0a-operator-scripts\") pod \"85130774-d029-4aa8-b4ab-a06843b68f0a\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.281089 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvjx\" (UniqueName: \"kubernetes.io/projected/85130774-d029-4aa8-b4ab-a06843b68f0a-kube-api-access-grvjx\") pod \"85130774-d029-4aa8-b4ab-a06843b68f0a\" (UID: \"85130774-d029-4aa8-b4ab-a06843b68f0a\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.282841 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85130774-d029-4aa8-b4ab-a06843b68f0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85130774-d029-4aa8-b4ab-a06843b68f0a" (UID: "85130774-d029-4aa8-b4ab-a06843b68f0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.290677 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85130774-d029-4aa8-b4ab-a06843b68f0a-kube-api-access-grvjx" (OuterVolumeSpecName: "kube-api-access-grvjx") pod "85130774-d029-4aa8-b4ab-a06843b68f0a" (UID: "85130774-d029-4aa8-b4ab-a06843b68f0a"). InnerVolumeSpecName "kube-api-access-grvjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.383837 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85130774-d029-4aa8-b4ab-a06843b68f0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.383869 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grvjx\" (UniqueName: \"kubernetes.io/projected/85130774-d029-4aa8-b4ab-a06843b68f0a-kube-api-access-grvjx\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.570059 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.574745 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.616327 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.618322 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cpm64" event={"ID":"648722b8-a2df-4dac-94df-6dd3aa1db7be","Type":"ContainerDied","Data":"d6eae483f0015af74673d7901540262495ede01afd7b2c510360455c03f06eca"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.618352 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6eae483f0015af74673d7901540262495ede01afd7b2c510360455c03f06eca" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.623196 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chdvr" event={"ID":"9150aa96-ff02-409b-ac82-c6e3e88d54cc","Type":"ContainerDied","Data":"c523d358a2ff870d897cda97061a70db698f4e22011897e990bef28dda92123b"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.623256 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c523d358a2ff870d897cda97061a70db698f4e22011897e990bef28dda92123b" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.623361 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chdvr" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.623620 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.627348 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cae5-account-create-update-mjb74" event={"ID":"85130774-d029-4aa8-b4ab-a06843b68f0a","Type":"ContainerDied","Data":"05a56dab5f3c21cc7dc528d45a8b3738157d501199780f6e6984a26d306c63c3"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.627379 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05a56dab5f3c21cc7dc528d45a8b3738157d501199780f6e6984a26d306c63c3" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.627391 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cae5-account-create-update-mjb74" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.634516 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cdnzk" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.634586 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cdnzk" event={"ID":"56e15650-511c-4fa4-b083-95f4c90e5833","Type":"ContainerDied","Data":"0d7d65e86408d22889ca3d58357ef79a8b470aa01cde337a3c1de619edf70eac"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.634707 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7d65e86408d22889ca3d58357ef79a8b470aa01cde337a3c1de619edf70eac" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.638654 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e469-account-create-update-fxbxd" event={"ID":"d17424d3-bc7c-4a17-890b-5ddb43c4004b","Type":"ContainerDied","Data":"dac3863eda84f2edc4933d441eab64778a3d09563b93c7859d4485ef9f5c1726"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.638703 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac3863eda84f2edc4933d441eab64778a3d09563b93c7859d4485ef9f5c1726" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.641682 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f497-account-create-update-qlnhj" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.641643 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f497-account-create-update-qlnhj" event={"ID":"34b57444-8f69-47dd-baf7-ecdae356f3bd","Type":"ContainerDied","Data":"35da18727389ce43939f3be9f13ea7da097f5cbd600ea7e8e9f85e3af3a10050"} Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.641855 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35da18727389ce43939f3be9f13ea7da097f5cbd600ea7e8e9f85e3af3a10050" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716300 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqc9d\" (UniqueName: \"kubernetes.io/projected/9150aa96-ff02-409b-ac82-c6e3e88d54cc-kube-api-access-bqc9d\") pod \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716362 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xqgd\" (UniqueName: \"kubernetes.io/projected/56e15650-511c-4fa4-b083-95f4c90e5833-kube-api-access-7xqgd\") pod \"56e15650-511c-4fa4-b083-95f4c90e5833\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716397 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/648722b8-a2df-4dac-94df-6dd3aa1db7be-operator-scripts\") pod \"648722b8-a2df-4dac-94df-6dd3aa1db7be\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716491 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e15650-511c-4fa4-b083-95f4c90e5833-operator-scripts\") pod \"56e15650-511c-4fa4-b083-95f4c90e5833\" (UID: \"56e15650-511c-4fa4-b083-95f4c90e5833\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716560 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhf8\" (UniqueName: 
\"kubernetes.io/projected/34b57444-8f69-47dd-baf7-ecdae356f3bd-kube-api-access-swhf8\") pod \"34b57444-8f69-47dd-baf7-ecdae356f3bd\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716614 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b57444-8f69-47dd-baf7-ecdae356f3bd-operator-scripts\") pod \"34b57444-8f69-47dd-baf7-ecdae356f3bd\" (UID: \"34b57444-8f69-47dd-baf7-ecdae356f3bd\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716715 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9150aa96-ff02-409b-ac82-c6e3e88d54cc-operator-scripts\") pod \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\" (UID: \"9150aa96-ff02-409b-ac82-c6e3e88d54cc\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.716832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbbr\" (UniqueName: \"kubernetes.io/projected/648722b8-a2df-4dac-94df-6dd3aa1db7be-kube-api-access-nlbbr\") pod \"648722b8-a2df-4dac-94df-6dd3aa1db7be\" (UID: \"648722b8-a2df-4dac-94df-6dd3aa1db7be\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.717325 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e15650-511c-4fa4-b083-95f4c90e5833-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56e15650-511c-4fa4-b083-95f4c90e5833" (UID: "56e15650-511c-4fa4-b083-95f4c90e5833"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.718213 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b57444-8f69-47dd-baf7-ecdae356f3bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34b57444-8f69-47dd-baf7-ecdae356f3bd" (UID: "34b57444-8f69-47dd-baf7-ecdae356f3bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.718269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/648722b8-a2df-4dac-94df-6dd3aa1db7be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "648722b8-a2df-4dac-94df-6dd3aa1db7be" (UID: "648722b8-a2df-4dac-94df-6dd3aa1db7be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.718843 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9150aa96-ff02-409b-ac82-c6e3e88d54cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9150aa96-ff02-409b-ac82-c6e3e88d54cc" (UID: "9150aa96-ff02-409b-ac82-c6e3e88d54cc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.718894 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e15650-511c-4fa4-b083-95f4c90e5833-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.723056 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9150aa96-ff02-409b-ac82-c6e3e88d54cc-kube-api-access-bqc9d" (OuterVolumeSpecName: "kube-api-access-bqc9d") pod "9150aa96-ff02-409b-ac82-c6e3e88d54cc" (UID: "9150aa96-ff02-409b-ac82-c6e3e88d54cc"). InnerVolumeSpecName "kube-api-access-bqc9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.723438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648722b8-a2df-4dac-94df-6dd3aa1db7be-kube-api-access-nlbbr" (OuterVolumeSpecName: "kube-api-access-nlbbr") pod "648722b8-a2df-4dac-94df-6dd3aa1db7be" (UID: "648722b8-a2df-4dac-94df-6dd3aa1db7be"). InnerVolumeSpecName "kube-api-access-nlbbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.724005 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b57444-8f69-47dd-baf7-ecdae356f3bd-kube-api-access-swhf8" (OuterVolumeSpecName: "kube-api-access-swhf8") pod "34b57444-8f69-47dd-baf7-ecdae356f3bd" (UID: "34b57444-8f69-47dd-baf7-ecdae356f3bd"). InnerVolumeSpecName "kube-api-access-swhf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.725120 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e15650-511c-4fa4-b083-95f4c90e5833-kube-api-access-7xqgd" (OuterVolumeSpecName: "kube-api-access-7xqgd") pod "56e15650-511c-4fa4-b083-95f4c90e5833" (UID: "56e15650-511c-4fa4-b083-95f4c90e5833"). InnerVolumeSpecName "kube-api-access-7xqgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.735869 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.819577 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhsk\" (UniqueName: \"kubernetes.io/projected/d17424d3-bc7c-4a17-890b-5ddb43c4004b-kube-api-access-nzhsk\") pod \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.821601 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17424d3-bc7c-4a17-890b-5ddb43c4004b-operator-scripts\") pod \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\" (UID: \"d17424d3-bc7c-4a17-890b-5ddb43c4004b\") " Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.822913 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqc9d\" (UniqueName: \"kubernetes.io/projected/9150aa96-ff02-409b-ac82-c6e3e88d54cc-kube-api-access-bqc9d\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.824712 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xqgd\" (UniqueName: \"kubernetes.io/projected/56e15650-511c-4fa4-b083-95f4c90e5833-kube-api-access-7xqgd\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.825290 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/648722b8-a2df-4dac-94df-6dd3aa1db7be-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.825369 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swhf8\" (UniqueName: \"kubernetes.io/projected/34b57444-8f69-47dd-baf7-ecdae356f3bd-kube-api-access-swhf8\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.823244 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17424d3-bc7c-4a17-890b-5ddb43c4004b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d17424d3-bc7c-4a17-890b-5ddb43c4004b" (UID: "d17424d3-bc7c-4a17-890b-5ddb43c4004b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.825433 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b57444-8f69-47dd-baf7-ecdae356f3bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.825525 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9150aa96-ff02-409b-ac82-c6e3e88d54cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.825547 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbbr\" (UniqueName: \"kubernetes.io/projected/648722b8-a2df-4dac-94df-6dd3aa1db7be-kube-api-access-nlbbr\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.829709 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17424d3-bc7c-4a17-890b-5ddb43c4004b-kube-api-access-nzhsk" (OuterVolumeSpecName: "kube-api-access-nzhsk") pod "d17424d3-bc7c-4a17-890b-5ddb43c4004b" (UID: "d17424d3-bc7c-4a17-890b-5ddb43c4004b"). InnerVolumeSpecName "kube-api-access-nzhsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.927696 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhsk\" (UniqueName: \"kubernetes.io/projected/d17424d3-bc7c-4a17-890b-5ddb43c4004b-kube-api-access-nzhsk\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:38 crc kubenswrapper[4856]: I1203 09:29:38.928220 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17424d3-bc7c-4a17-890b-5ddb43c4004b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.672353 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d4pvx" event={"ID":"9a7667fa-862b-4ac3-a7b1-92e336d1c63d","Type":"ContainerStarted","Data":"e1bdb6d8f4f2c275ca11b7cadf14654ac4813526bb5d66bea88785b8691b9d1b"} Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.684924 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e469-account-create-update-fxbxd" Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.685741 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"f3defe10d58ee1a71462624b19e409f60b582fc54c5059f6ff07ff4a48993c57"} Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.686047 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"7e7d82fb45b387ba5cc0b70a2cbad97d215e147e3639266a573ac844128dcd9e"} Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.686074 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"2e7fdd017861ee6dc7df1ecfe51bae4e25c367adfc47edea13a29a2ca3092457"} Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.686086 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"c2ef69e312f66e75ebb4be9317437c7769f919e3c96d037bc050096a36f376cf"} Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.686098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"a197ca33f8b58ec5f11019209100b7e96a049b9c1239c3c0b3494474bfaf85fe"} Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.686088 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cpm64" Dec 03 09:29:39 crc kubenswrapper[4856]: I1203 09:29:39.708357 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-d4pvx" podStartSLOduration=4.779844393 podStartE2EDuration="9.708336504s" podCreationTimestamp="2025-12-03 09:29:30 +0000 UTC" firstStartedPulling="2025-12-03 09:29:33.591756238 +0000 UTC m=+1041.774648539" lastFinishedPulling="2025-12-03 09:29:38.520248349 +0000 UTC m=+1046.703140650" observedRunningTime="2025-12-03 09:29:39.697688295 +0000 UTC m=+1047.880580606" watchObservedRunningTime="2025-12-03 09:29:39.708336504 +0000 UTC m=+1047.891228805" Dec 03 09:29:40 crc kubenswrapper[4856]: I1203 09:29:40.721289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"13b3f1e64188556d88daa7cb0ce645c83b402979a231cc1f779aec808799cfb0"} Dec 03 09:29:40 crc kubenswrapper[4856]: I1203 09:29:40.721340 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9a29fb43-ed6d-499a-a4f7-b847de3dbf71","Type":"ContainerStarted","Data":"8efc3b9a27cefe8511d5bcfc6232fc1273b4e0ebe4f3e4563915d4c039065133"} Dec 03 09:29:40 crc kubenswrapper[4856]: I1203 09:29:40.770041 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.693946342 podStartE2EDuration="55.770007281s" podCreationTimestamp="2025-12-03 09:28:45 +0000 UTC" firstStartedPulling="2025-12-03 09:29:19.438894361 +0000 UTC m=+1027.621786662" lastFinishedPulling="2025-12-03 09:29:38.5149553 +0000 UTC m=+1046.697847601" observedRunningTime="2025-12-03 09:29:40.756109294 +0000 UTC m=+1048.939001595" 
watchObservedRunningTime="2025-12-03 09:29:40.770007281 +0000 UTC m=+1048.952899572" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.037434 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-6jllk"] Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038001 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="extract-content" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038018 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="extract-content" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038036 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="extract-utilities" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038043 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="extract-utilities" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038058 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9150aa96-ff02-409b-ac82-c6e3e88d54cc" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038064 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9150aa96-ff02-409b-ac82-c6e3e88d54cc" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038077 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e15650-511c-4fa4-b083-95f4c90e5833" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038085 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e15650-511c-4fa4-b083-95f4c90e5833" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038095 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648722b8-a2df-4dac-94df-6dd3aa1db7be" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038102 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="648722b8-a2df-4dac-94df-6dd3aa1db7be" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038112 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17424d3-bc7c-4a17-890b-5ddb43c4004b" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038118 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17424d3-bc7c-4a17-890b-5ddb43c4004b" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038131 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85130774-d029-4aa8-b4ab-a06843b68f0a" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038137 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="85130774-d029-4aa8-b4ab-a06843b68f0a" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038149 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="registry-server" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038155 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="registry-server" Dec 03 09:29:41 crc 
kubenswrapper[4856]: E1203 09:29:41.038170 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b57444-8f69-47dd-baf7-ecdae356f3bd" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038177 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b57444-8f69-47dd-baf7-ecdae356f3bd" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038185 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="registry-server" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038190 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="registry-server" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038201 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="extract-content" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038207 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="extract-content" Dec 03 09:29:41 crc kubenswrapper[4856]: E1203 09:29:41.038220 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="extract-utilities" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038226 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="extract-utilities" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038402 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="648722b8-a2df-4dac-94df-6dd3aa1db7be" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038415 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9150aa96-ff02-409b-ac82-c6e3e88d54cc" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038428 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecd7103-2bf5-46c2-a17e-5d5c0e31e13a" containerName="registry-server" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038439 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0bd46f-8cf2-40b2-8d10-8d2d3d064af2" containerName="registry-server" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038451 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17424d3-bc7c-4a17-890b-5ddb43c4004b" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038461 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e15650-511c-4fa4-b083-95f4c90e5833" containerName="mariadb-database-create" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038473 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="85130774-d029-4aa8-b4ab-a06843b68f0a" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.038487 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b57444-8f69-47dd-baf7-ecdae356f3bd" containerName="mariadb-account-create-update" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.048865 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.057504 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.087540 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-6jllk"] Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.174129 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-config\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.174205 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlx2\" (UniqueName: \"kubernetes.io/projected/eb593412-a8af-4a63-a5a1-ac5d7dec706b-kube-api-access-fdlx2\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.174416 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.174599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.174700 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.174737 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.278172 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.276708 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: 
\"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.278276 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.278321 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.278346 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.278414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-config\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.278448 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlx2\" (UniqueName: \"kubernetes.io/projected/eb593412-a8af-4a63-a5a1-ac5d7dec706b-kube-api-access-fdlx2\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.279656 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.280291 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.280945 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.281567 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-config\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: 
I1203 09:29:41.304030 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlx2\" (UniqueName: \"kubernetes.io/projected/eb593412-a8af-4a63-a5a1-ac5d7dec706b-kube-api-access-fdlx2\") pod \"dnsmasq-dns-5c79d794d7-6jllk\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.385971 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:41 crc kubenswrapper[4856]: W1203 09:29:41.862613 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb593412_a8af_4a63_a5a1_ac5d7dec706b.slice/crio-bc9efaa02786b4e963def8bb0589107b2fa8c473bd77abd4b4e022310dc87b80 WatchSource:0}: Error finding container bc9efaa02786b4e963def8bb0589107b2fa8c473bd77abd4b4e022310dc87b80: Status 404 returned error can't find the container with id bc9efaa02786b4e963def8bb0589107b2fa8c473bd77abd4b4e022310dc87b80 Dec 03 09:29:41 crc kubenswrapper[4856]: I1203 09:29:41.865971 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-6jllk"] Dec 03 09:29:42 crc kubenswrapper[4856]: I1203 09:29:42.741256 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a7667fa-862b-4ac3-a7b1-92e336d1c63d" containerID="e1bdb6d8f4f2c275ca11b7cadf14654ac4813526bb5d66bea88785b8691b9d1b" exitCode=0 Dec 03 09:29:42 crc kubenswrapper[4856]: I1203 09:29:42.741359 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d4pvx" event={"ID":"9a7667fa-862b-4ac3-a7b1-92e336d1c63d","Type":"ContainerDied","Data":"e1bdb6d8f4f2c275ca11b7cadf14654ac4813526bb5d66bea88785b8691b9d1b"} Dec 03 09:29:42 crc kubenswrapper[4856]: I1203 09:29:42.743245 4856 generic.go:334] "Generic (PLEG): container finished" podID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerID="46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b" exitCode=0 Dec 03 09:29:42 crc kubenswrapper[4856]: I1203 09:29:42.743290 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" event={"ID":"eb593412-a8af-4a63-a5a1-ac5d7dec706b","Type":"ContainerDied","Data":"46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b"} Dec 03 09:29:42 crc kubenswrapper[4856]: I1203 09:29:42.743320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" event={"ID":"eb593412-a8af-4a63-a5a1-ac5d7dec706b","Type":"ContainerStarted","Data":"bc9efaa02786b4e963def8bb0589107b2fa8c473bd77abd4b4e022310dc87b80"} Dec 03 09:29:43 crc kubenswrapper[4856]: I1203 09:29:43.757829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" event={"ID":"eb593412-a8af-4a63-a5a1-ac5d7dec706b","Type":"ContainerStarted","Data":"ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286"} Dec 03 09:29:43 crc kubenswrapper[4856]: I1203 09:29:43.758700 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:43 crc kubenswrapper[4856]: I1203 09:29:43.795649 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" podStartSLOduration=2.7956266469999997 podStartE2EDuration="2.795626647s" podCreationTimestamp="2025-12-03 09:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:43.787197132 +0000 UTC m=+1051.970089473" watchObservedRunningTime="2025-12-03 09:29:43.795626647 +0000 UTC m=+1051.978518968" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.100607 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.145304 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfc6r\" (UniqueName: \"kubernetes.io/projected/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-kube-api-access-jfc6r\") pod \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.145377 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-config-data\") pod \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.145638 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-combined-ca-bundle\") pod \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\" (UID: \"9a7667fa-862b-4ac3-a7b1-92e336d1c63d\") " Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.152236 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-kube-api-access-jfc6r" (OuterVolumeSpecName: "kube-api-access-jfc6r") pod "9a7667fa-862b-4ac3-a7b1-92e336d1c63d" (UID: "9a7667fa-862b-4ac3-a7b1-92e336d1c63d"). InnerVolumeSpecName "kube-api-access-jfc6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.191967 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a7667fa-862b-4ac3-a7b1-92e336d1c63d" (UID: "9a7667fa-862b-4ac3-a7b1-92e336d1c63d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.198137 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-config-data" (OuterVolumeSpecName: "config-data") pod "9a7667fa-862b-4ac3-a7b1-92e336d1c63d" (UID: "9a7667fa-862b-4ac3-a7b1-92e336d1c63d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.247692 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.247728 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfc6r\" (UniqueName: \"kubernetes.io/projected/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-kube-api-access-jfc6r\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.247741 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7667fa-862b-4ac3-a7b1-92e336d1c63d-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.774289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d4pvx" event={"ID":"9a7667fa-862b-4ac3-a7b1-92e336d1c63d","Type":"ContainerDied","Data":"81360452825a334ed119830dea55fe08f38be747faa3cefc52eb72551502821e"} Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.774334 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d4pvx" Dec 03 09:29:44 crc kubenswrapper[4856]: I1203 09:29:44.774363 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81360452825a334ed119830dea55fe08f38be747faa3cefc52eb72551502821e" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.039846 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wttb4"] Dec 03 09:29:45 crc kubenswrapper[4856]: E1203 09:29:45.040410 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7667fa-862b-4ac3-a7b1-92e336d1c63d" containerName="keystone-db-sync" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.040434 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7667fa-862b-4ac3-a7b1-92e336d1c63d" containerName="keystone-db-sync" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.040670 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7667fa-862b-4ac3-a7b1-92e336d1c63d" containerName="keystone-db-sync" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.042045 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.049315 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.049414 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k95wr" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.049921 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.050113 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.050540 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.076120 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wttb4"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.087945 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-6jllk"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.147039 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-srl54"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.148556 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.169739 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-fernet-keys\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.169996 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-config-data\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.170103 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-credential-keys\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.170208 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7p6\" (UniqueName: \"kubernetes.io/projected/6c624d83-5d35-4f56-a482-1a6c3f422f19-kube-api-access-lt7p6\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.170297 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-combined-ca-bundle\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 
09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.170429 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-scripts\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.172118 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-srl54"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273603 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-combined-ca-bundle\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-svc\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273678 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273774 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-scripts\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273838 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7mk\" (UniqueName: \"kubernetes.io/projected/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-kube-api-access-9g7mk\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273905 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273929 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-fernet-keys\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-config-data\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.273977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-credential-keys\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.274019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt7p6\" (UniqueName: \"kubernetes.io/projected/6c624d83-5d35-4f56-a482-1a6c3f422f19-kube-api-access-lt7p6\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.274047 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-config\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.303204 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-config-data\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.312938 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-combined-ca-bundle\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.313272 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-scripts\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.320441 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-credential-keys\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.323579 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-fernet-keys\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " 
pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.336740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt7p6\" (UniqueName: \"kubernetes.io/projected/6c624d83-5d35-4f56-a482-1a6c3f422f19-kube-api-access-lt7p6\") pod \"keystone-bootstrap-wttb4\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.365433 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.377245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.377303 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7mk\" (UniqueName: \"kubernetes.io/projected/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-kube-api-access-9g7mk\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.377336 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.377392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-config\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.377418 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.377438 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-svc\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.383739 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-svc\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.384437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.394746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.395537 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-config\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.405756 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.417901 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c6d6478fc-xjpl5"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.420402 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.450504 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7mk\" (UniqueName: \"kubernetes.io/projected/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-kube-api-access-9g7mk\") pod \"dnsmasq-dns-5b868669f-srl54\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.461032 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.461398 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.461465 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-gwdbl" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.461562 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.472898 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.481174 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cadb6e2-105e-4b63-afeb-77fe7030b233-horizon-secret-key\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.481231 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2z2g\" (UniqueName: \"kubernetes.io/projected/0cadb6e2-105e-4b63-afeb-77fe7030b233-kube-api-access-t2z2g\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.481270 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-scripts\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.481294 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cadb6e2-105e-4b63-afeb-77fe7030b233-logs\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.481369 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-config-data\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.499096 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vxpjp"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.500958 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.527486 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.527830 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.528005 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rwwvc" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.552924 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c6d6478fc-xjpl5"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.583557 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-scripts\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.583624 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cadb6e2-105e-4b63-afeb-77fe7030b233-logs\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.583677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-combined-ca-bundle\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.583765 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-config-data\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.583976 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cadb6e2-105e-4b63-afeb-77fe7030b233-horizon-secret-key\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.584008 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z42zz\" (UniqueName: \"kubernetes.io/projected/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-kube-api-access-z42zz\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.584036 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2z2g\" (UniqueName: \"kubernetes.io/projected/0cadb6e2-105e-4b63-afeb-77fe7030b233-kube-api-access-t2z2g\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.584060 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-config\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.584464 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cadb6e2-105e-4b63-afeb-77fe7030b233-logs\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.585728 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vxpjp"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.592735 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-config-data\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.603632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cadb6e2-105e-4b63-afeb-77fe7030b233-horizon-secret-key\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.587441 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-scripts\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.624942 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2z2g\" (UniqueName: \"kubernetes.io/projected/0cadb6e2-105e-4b63-afeb-77fe7030b233-kube-api-access-t2z2g\") pod \"horizon-5c6d6478fc-xjpl5\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.688848 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z42zz\" (UniqueName: \"kubernetes.io/projected/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-kube-api-access-z42zz\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.688947 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-config\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.689046 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-combined-ca-bundle\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.705904 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] 
Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.708234 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-config\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.709469 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.718684 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-combined-ca-bundle\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.725426 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.735096 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.747436 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jpsww"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.747626 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z42zz\" (UniqueName: \"kubernetes.io/projected/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-kube-api-access-z42zz\") pod \"neutron-db-sync-vxpjp\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.749036 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.764082 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.764389 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.764554 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-td4fj" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.811958 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.820359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.826846 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-scripts\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.826897 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-combined-ca-bundle\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.826975 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-scripts\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.827235 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j7m5\" (UniqueName: \"kubernetes.io/projected/d683d201-f027-4492-966c-95fa0e5004cd-kube-api-access-8j7m5\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.827578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.827725 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-run-httpd\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.827944 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j574\" (UniqueName: 
\"kubernetes.io/projected/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-kube-api-access-8j574\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.828028 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-db-sync-config-data\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.828322 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-etc-machine-id\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.828361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-config-data\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.828552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-config-data\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.828578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-log-httpd\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.888703 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.891959 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerName="dnsmasq-dns" containerID="cri-o://ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286" gracePeriod=10 Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.925994 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945144 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-db-sync-config-data\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945367 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-etc-machine-id\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945416 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-config-data\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945516 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-config-data\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945537 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-log-httpd\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945631 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-scripts\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945725 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-combined-ca-bundle\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945764 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-scripts\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.945793 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j7m5\" (UniqueName: 
\"kubernetes.io/projected/d683d201-f027-4492-966c-95fa0e5004cd-kube-api-access-8j7m5\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.950650 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-log-httpd\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.957889 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.958069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-run-httpd\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.958139 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j574\" (UniqueName: \"kubernetes.io/projected/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-kube-api-access-8j574\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.968016 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-config-data\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.968093 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-etc-machine-id\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.972176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-run-httpd\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.976107 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jpsww"] Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.985127 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.993358 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j574\" (UniqueName: \"kubernetes.io/projected/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-kube-api-access-8j574\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" 
Dec 03 09:29:45 crc kubenswrapper[4856]: I1203 09:29:45.996892 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-config-data\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.001661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-scripts\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.009202 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j7m5\" (UniqueName: \"kubernetes.io/projected/d683d201-f027-4492-966c-95fa0e5004cd-kube-api-access-8j7m5\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.011449 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-combined-ca-bundle\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.047363 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2dnjq"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.053279 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.056042 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-db-sync-config-data\") pod \"cinder-db-sync-jpsww\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.064233 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.073018 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.073396 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ldgm7" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.074390 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-scripts\") pod \"ceilometer-0\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " pod="openstack/ceilometer-0" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.078897 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.080051 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l9lg2"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.083668 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.089896 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xjbjr" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.090364 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.107938 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2dnjq"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.116780 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l9lg2"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.158103 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jpsww" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.158904 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-srl54"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171159 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-config-data\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-combined-ca-bundle\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171319 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-db-sync-config-data\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171352 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zkj2\" (UniqueName: \"kubernetes.io/projected/17fa448b-f085-4377-a7d2-a4e078ae00c3-kube-api-access-6zkj2\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171414 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-scripts\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171433 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-combined-ca-bundle\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171516 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fa448b-f085-4377-a7d2-a4e078ae00c3-logs\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.171665 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shnl\" (UniqueName: \"kubernetes.io/projected/a3f84948-d98d-443d-9055-b4f4d28369b4-kube-api-access-2shnl\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.193016 4856 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-cf78879c9-g7ssp"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.197648 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.202142 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-g7ssp"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.210479 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-764b5497d9-dcts9"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.221686 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.225015 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-764b5497d9-dcts9"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fa448b-f085-4377-a7d2-a4e078ae00c3-logs\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273685 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-svc\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273729 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2shnl\" (UniqueName: \"kubernetes.io/projected/a3f84948-d98d-443d-9055-b4f4d28369b4-kube-api-access-2shnl\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273788 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-config-data\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptf7n\" (UniqueName: \"kubernetes.io/projected/9fdad791-ab4e-49cb-acc9-f49240405f83-kube-api-access-ptf7n\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" 
(UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273852 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-combined-ca-bundle\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-config\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273934 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-db-sync-config-data\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.273958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zkj2\" (UniqueName: \"kubernetes.io/projected/17fa448b-f085-4377-a7d2-a4e078ae00c3-kube-api-access-6zkj2\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.274002 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-scripts\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.274019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-combined-ca-bundle\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.275378 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fa448b-f085-4377-a7d2-a4e078ae00c3-logs\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.288407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-db-sync-config-data\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc 
kubenswrapper[4856]: I1203 09:29:46.296771 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-combined-ca-bundle\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.299030 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-config-data\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.321535 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-combined-ca-bundle\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.321930 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-scripts\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.323174 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zkj2\" (UniqueName: \"kubernetes.io/projected/17fa448b-f085-4377-a7d2-a4e078ae00c3-kube-api-access-6zkj2\") pod \"placement-db-sync-2dnjq\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.349244 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2shnl\" (UniqueName: \"kubernetes.io/projected/a3f84948-d98d-443d-9055-b4f4d28369b4-kube-api-access-2shnl\") pod \"barbican-db-sync-l9lg2\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.358300 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377462 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-config-data\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377517 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8vm\" (UniqueName: \"kubernetes.io/projected/032e03e8-4998-4765-a8e5-b80f5ad90372-kube-api-access-4z8vm\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377575 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-scripts\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377639 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-svc\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377677 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377702 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e03e8-4998-4765-a8e5-b80f5ad90372-logs\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377731 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptf7n\" (UniqueName: \"kubernetes.io/projected/9fdad791-ab4e-49cb-acc9-f49240405f83-kube-api-access-ptf7n\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377750 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " 
pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377785 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-config\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.377985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/032e03e8-4998-4765-a8e5-b80f5ad90372-horizon-secret-key\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.379304 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.380185 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-svc\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.380732 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.382504 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-config\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.391223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.405201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptf7n\" (UniqueName: \"kubernetes.io/projected/9fdad791-ab4e-49cb-acc9-f49240405f83-kube-api-access-ptf7n\") pod \"dnsmasq-dns-cf78879c9-g7ssp\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.480638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e03e8-4998-4765-a8e5-b80f5ad90372-logs\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.480747 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/032e03e8-4998-4765-a8e5-b80f5ad90372-horizon-secret-key\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.480824 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-config-data\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.480854 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8vm\" (UniqueName: \"kubernetes.io/projected/032e03e8-4998-4765-a8e5-b80f5ad90372-kube-api-access-4z8vm\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.480927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-scripts\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.481966 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-scripts\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.482251 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e03e8-4998-4765-a8e5-b80f5ad90372-logs\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.484313 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-config-data\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.489153 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/032e03e8-4998-4765-a8e5-b80f5ad90372-horizon-secret-key\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.504630 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8vm\" (UniqueName: \"kubernetes.io/projected/032e03e8-4998-4765-a8e5-b80f5ad90372-kube-api-access-4z8vm\") pod \"horizon-764b5497d9-dcts9\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.507786 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.507970 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2dnjq" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.533173 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wttb4"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.554474 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.591907 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:46 crc kubenswrapper[4856]: W1203 09:29:46.839111 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7070b51_710e_44fc_aa4b_11bdb4ec0e25.slice/crio-abc80fe0434d81f8902584fc76faa176f1324a8e97431d9f0a7dbb282cd15fb9 WatchSource:0}: Error finding container abc80fe0434d81f8902584fc76faa176f1324a8e97431d9f0a7dbb282cd15fb9: Status 404 returned error can't find the container with id abc80fe0434d81f8902584fc76faa176f1324a8e97431d9f0a7dbb282cd15fb9 Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.842976 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-srl54"] Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.855458 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.923546 4856 generic.go:334] "Generic (PLEG): container finished" podID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerID="ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286" exitCode=0 Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.924160 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" event={"ID":"eb593412-a8af-4a63-a5a1-ac5d7dec706b","Type":"ContainerDied","Data":"ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286"} Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.924203 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" event={"ID":"eb593412-a8af-4a63-a5a1-ac5d7dec706b","Type":"ContainerDied","Data":"bc9efaa02786b4e963def8bb0589107b2fa8c473bd77abd4b4e022310dc87b80"} Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.924227 4856 scope.go:117] "RemoveContainer" containerID="ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.924400 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-6jllk" Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.937065 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-srl54" event={"ID":"e7070b51-710e-44fc-aa4b-11bdb4ec0e25","Type":"ContainerStarted","Data":"abc80fe0434d81f8902584fc76faa176f1324a8e97431d9f0a7dbb282cd15fb9"} Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.940622 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wttb4" event={"ID":"6c624d83-5d35-4f56-a482-1a6c3f422f19","Type":"ContainerStarted","Data":"f203ff6d7fc4c86e7c2d37d51c218ef8c9ec305ac3535847732f0be48a1ccfff"} Dec 03 09:29:46 crc kubenswrapper[4856]: I1203 09:29:46.981284 4856 scope.go:117] "RemoveContainer" containerID="46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.021463 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-swift-storage-0\") pod \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.021554 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-nb\") pod \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.021661 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-config\") pod \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.021738 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdlx2\" (UniqueName: \"kubernetes.io/projected/eb593412-a8af-4a63-a5a1-ac5d7dec706b-kube-api-access-fdlx2\") pod \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.021770 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-svc\") pod \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.021889 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-sb\") pod \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\" (UID: \"eb593412-a8af-4a63-a5a1-ac5d7dec706b\") " Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.040003 4856 scope.go:117] "RemoveContainer" containerID="ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286" Dec 03 09:29:47 crc kubenswrapper[4856]: E1203 09:29:47.041322 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286\": container with ID starting with ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286 not found: ID does not 
exist" containerID="ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.041435 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286"} err="failed to get container status \"ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286\": rpc error: code = NotFound desc = could not find container \"ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286\": container with ID starting with ca577d1fdbaaff24d67a2df8a8d843850c334b419c0a9551cc08e955356fe286 not found: ID does not exist" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.041534 4856 scope.go:117] "RemoveContainer" containerID="46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b" Dec 03 09:29:47 crc kubenswrapper[4856]: E1203 09:29:47.042887 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b\": container with ID starting with 46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b not found: ID does not exist" containerID="46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.042961 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b"} err="failed to get container status \"46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b\": rpc error: code = NotFound desc = could not find container \"46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b\": container with ID starting with 46ca43c66fd242682e62bbc958170ef07ec17d49d11d87f342d88d2b0426c80b not found: ID does not exist" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.047898 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb593412-a8af-4a63-a5a1-ac5d7dec706b-kube-api-access-fdlx2" (OuterVolumeSpecName: "kube-api-access-fdlx2") pod "eb593412-a8af-4a63-a5a1-ac5d7dec706b" (UID: "eb593412-a8af-4a63-a5a1-ac5d7dec706b"). InnerVolumeSpecName "kube-api-access-fdlx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.124832 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdlx2\" (UniqueName: \"kubernetes.io/projected/eb593412-a8af-4a63-a5a1-ac5d7dec706b-kube-api-access-fdlx2\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.157295 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vxpjp"] Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.178883 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jpsww"] Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.193855 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb593412-a8af-4a63-a5a1-ac5d7dec706b" (UID: "eb593412-a8af-4a63-a5a1-ac5d7dec706b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:47 crc kubenswrapper[4856]: W1203 09:29:47.195555 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab20ccb_73a3_4d7d_ba56_f778d4cee6d0.slice/crio-2eb0c0ebf364311f7b7a7fad0ee1262f0a4fd264a539564dbef1a6e87328a287 WatchSource:0}: Error finding container 2eb0c0ebf364311f7b7a7fad0ee1262f0a4fd264a539564dbef1a6e87328a287: Status 404 returned error can't find the container with id 2eb0c0ebf364311f7b7a7fad0ee1262f0a4fd264a539564dbef1a6e87328a287 Dec 03 09:29:47 crc kubenswrapper[4856]: I1203 09:29:47.198721 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c6d6478fc-xjpl5"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.228978 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: W1203 09:29:47.257457 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cadb6e2_105e_4b63_afeb_77fe7030b233.slice/crio-64b4ef4935e275053f5e76c661d4a3d48b987fa31472c19c9d15af26f36f7f7b WatchSource:0}: Error finding container 64b4ef4935e275053f5e76c661d4a3d48b987fa31472c19c9d15af26f36f7f7b: Status 404 returned error can't find the container with id 64b4ef4935e275053f5e76c661d4a3d48b987fa31472c19c9d15af26f36f7f7b Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.302319 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:29:48 crc kubenswrapper[4856]: W1203 09:29:47.371726 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd683d201_f027_4492_966c_95fa0e5004cd.slice/crio-ff6b78251b071bd45823f4c3bd24de9f20dc1b24ebc3698d47aed4a6f9e106f1 WatchSource:0}: Error finding container ff6b78251b071bd45823f4c3bd24de9f20dc1b24ebc3698d47aed4a6f9e106f1: Status 404 returned error can't find the container with id ff6b78251b071bd45823f4c3bd24de9f20dc1b24ebc3698d47aed4a6f9e106f1 Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.397983 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l9lg2"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.409837 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-config" (OuterVolumeSpecName: "config") pod "eb593412-a8af-4a63-a5a1-ac5d7dec706b" (UID: "eb593412-a8af-4a63-a5a1-ac5d7dec706b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.410304 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb593412-a8af-4a63-a5a1-ac5d7dec706b" (UID: "eb593412-a8af-4a63-a5a1-ac5d7dec706b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.414727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb593412-a8af-4a63-a5a1-ac5d7dec706b" (UID: "eb593412-a8af-4a63-a5a1-ac5d7dec706b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.421000 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2dnjq"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.440269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb593412-a8af-4a63-a5a1-ac5d7dec706b" (UID: "eb593412-a8af-4a63-a5a1-ac5d7dec706b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.446647 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.446675 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.446690 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.446702 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb593412-a8af-4a63-a5a1-ac5d7dec706b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.508165 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-764b5497d9-dcts9"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.570622 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-g7ssp"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.848281 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-6jllk"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.857916 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-6jllk"] Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.965676 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dnjq" event={"ID":"17fa448b-f085-4377-a7d2-a4e078ae00c3","Type":"ContainerStarted","Data":"cb3aacb4d72c88735ba2c13c428561eaf0f2a073c23c583a627c341a2b7eaf12"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.973822 4856 generic.go:334] "Generic (PLEG): container finished" podID="e7070b51-710e-44fc-aa4b-11bdb4ec0e25" containerID="02809bdf24e8148063d28ad665b9a7b1c6be0852d74462d4045decaccc89495d" exitCode=0 Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.973899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-srl54" 
event={"ID":"e7070b51-710e-44fc-aa4b-11bdb4ec0e25","Type":"ContainerDied","Data":"02809bdf24e8148063d28ad665b9a7b1c6be0852d74462d4045decaccc89495d"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.978605 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9lg2" event={"ID":"a3f84948-d98d-443d-9055-b4f4d28369b4","Type":"ContainerStarted","Data":"730245bd63fa1744d67ed792d04719d551cfac2708335df18207a97649f98bb9"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.987777 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jpsww" event={"ID":"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b","Type":"ContainerStarted","Data":"de8bfd4ca7ffb1d5ada8173f52774ead63b497db6f789ecfec9bae839d66395b"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.989025 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d683d201-f027-4492-966c-95fa0e5004cd","Type":"ContainerStarted","Data":"ff6b78251b071bd45823f4c3bd24de9f20dc1b24ebc3698d47aed4a6f9e106f1"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.991329 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wttb4" event={"ID":"6c624d83-5d35-4f56-a482-1a6c3f422f19","Type":"ContainerStarted","Data":"db18575320ee58947811506695d259f2d60012e0f2e4bf0a9febe1f1c7c67602"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.993401 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" event={"ID":"9fdad791-ab4e-49cb-acc9-f49240405f83","Type":"ContainerStarted","Data":"8e8635cfe69ccdaac967df1b2bbfb45b043611922258d66faa9a47de1df1c227"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.993442 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" event={"ID":"9fdad791-ab4e-49cb-acc9-f49240405f83","Type":"ContainerStarted","Data":"625432ff85352cf04a2948858416bda64a118628889880f277ab380ee62eb21f"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.994295 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-764b5497d9-dcts9" event={"ID":"032e03e8-4998-4765-a8e5-b80f5ad90372","Type":"ContainerStarted","Data":"e71ed2fd88d953b2f8b8ab964d7adf78e78752adbba798390a2c941659b0daed"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:47.997645 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6d6478fc-xjpl5" event={"ID":"0cadb6e2-105e-4b63-afeb-77fe7030b233","Type":"ContainerStarted","Data":"64b4ef4935e275053f5e76c661d4a3d48b987fa31472c19c9d15af26f36f7f7b"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.005358 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxpjp" event={"ID":"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0","Type":"ContainerStarted","Data":"2eb0c0ebf364311f7b7a7fad0ee1262f0a4fd264a539564dbef1a6e87328a287"} Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.038075 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wttb4" podStartSLOduration=3.038041143 podStartE2EDuration="3.038041143s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:48.021234315 +0000 UTC m=+1056.204126626" watchObservedRunningTime="2025-12-03 09:29:48.038041143 +0000 UTC m=+1056.220933454" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 
09:29:48.093773 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vxpjp" podStartSLOduration=3.093746115 podStartE2EDuration="3.093746115s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:48.076489546 +0000 UTC m=+1056.259381847" watchObservedRunningTime="2025-12-03 09:29:48.093746115 +0000 UTC m=+1056.276638416" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.433663 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.591876 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-config\") pod \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.591972 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g7mk\" (UniqueName: \"kubernetes.io/projected/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-kube-api-access-9g7mk\") pod \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.592132 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-swift-storage-0\") pod \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.592210 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-sb\") pod \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.592381 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-nb\") pod \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.592449 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-svc\") pod \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\" (UID: \"e7070b51-710e-44fc-aa4b-11bdb4ec0e25\") " Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.624488 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-config" (OuterVolumeSpecName: "config") pod "e7070b51-710e-44fc-aa4b-11bdb4ec0e25" (UID: "e7070b51-710e-44fc-aa4b-11bdb4ec0e25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.628626 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-kube-api-access-9g7mk" (OuterVolumeSpecName: "kube-api-access-9g7mk") pod "e7070b51-710e-44fc-aa4b-11bdb4ec0e25" (UID: "e7070b51-710e-44fc-aa4b-11bdb4ec0e25"). InnerVolumeSpecName "kube-api-access-9g7mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.632206 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7070b51-710e-44fc-aa4b-11bdb4ec0e25" (UID: "e7070b51-710e-44fc-aa4b-11bdb4ec0e25"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.636637 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7070b51-710e-44fc-aa4b-11bdb4ec0e25" (UID: "e7070b51-710e-44fc-aa4b-11bdb4ec0e25"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.639267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7070b51-710e-44fc-aa4b-11bdb4ec0e25" (UID: "e7070b51-710e-44fc-aa4b-11bdb4ec0e25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.646076 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7070b51-710e-44fc-aa4b-11bdb4ec0e25" (UID: "e7070b51-710e-44fc-aa4b-11bdb4ec0e25"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.695867 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.695902 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.695914 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.695926 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.695938 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.695949 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g7mk\" (UniqueName: \"kubernetes.io/projected/e7070b51-710e-44fc-aa4b-11bdb4ec0e25-kube-api-access-9g7mk\") on node \"crc\" DevicePath \"\"" Dec 03 09:29:48 crc kubenswrapper[4856]: I1203 09:29:48.732647 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" path="/var/lib/kubelet/pods/eb593412-a8af-4a63-a5a1-ac5d7dec706b/volumes" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.040657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-srl54" event={"ID":"e7070b51-710e-44fc-aa4b-11bdb4ec0e25","Type":"ContainerDied","Data":"abc80fe0434d81f8902584fc76faa176f1324a8e97431d9f0a7dbb282cd15fb9"} Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.040711 4856 scope.go:117] "RemoveContainer" containerID="02809bdf24e8148063d28ad665b9a7b1c6be0852d74462d4045decaccc89495d" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.040846 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-srl54" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.046523 4856 generic.go:334] "Generic (PLEG): container finished" podID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerID="8e8635cfe69ccdaac967df1b2bbfb45b043611922258d66faa9a47de1df1c227" exitCode=0 Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.046647 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" event={"ID":"9fdad791-ab4e-49cb-acc9-f49240405f83","Type":"ContainerDied","Data":"8e8635cfe69ccdaac967df1b2bbfb45b043611922258d66faa9a47de1df1c227"} Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.046745 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" event={"ID":"9fdad791-ab4e-49cb-acc9-f49240405f83","Type":"ContainerStarted","Data":"3ec631465d7275912bc9cf2c9740db5f9372aaa229307f64206922774d1c0820"} Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.048157 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.054767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxpjp" event={"ID":"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0","Type":"ContainerStarted","Data":"87d0121ed930968e361a811d3a8f9e0024155ec9e7462791a5ceb1a36a1128db"} Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.114471 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" podStartSLOduration=4.114436107 podStartE2EDuration="4.114436107s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:29:49.080647627 +0000 UTC m=+1057.263539938" watchObservedRunningTime="2025-12-03 09:29:49.114436107 +0000 UTC m=+1057.297328428" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.164707 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-srl54"] Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.178229 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-srl54"] Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.449848 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c6d6478fc-xjpl5"] Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.533733 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78fd484f8f-4w5ks"] Dec 03 09:29:49 crc kubenswrapper[4856]: E1203 09:29:49.534690 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerName="dnsmasq-dns" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.534717 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerName="dnsmasq-dns" Dec 03 09:29:49 crc kubenswrapper[4856]: E1203 09:29:49.534738 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7070b51-710e-44fc-aa4b-11bdb4ec0e25" containerName="init" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.534748 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7070b51-710e-44fc-aa4b-11bdb4ec0e25" containerName="init" Dec 03 09:29:49 crc kubenswrapper[4856]: E1203 09:29:49.534764 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerName="init" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.534771 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerName="init" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.535058 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb593412-a8af-4a63-a5a1-ac5d7dec706b" containerName="dnsmasq-dns" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.535084 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7070b51-710e-44fc-aa4b-11bdb4ec0e25" containerName="init" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.536494 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.572695 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.583343 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78fd484f8f-4w5ks"] Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.630173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-scripts\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.630254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-config-data\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.630519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452932c0-23f1-40ba-8dd8-121eff6a2ea6-logs\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.630981 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jd5\" (UniqueName: \"kubernetes.io/projected/452932c0-23f1-40ba-8dd8-121eff6a2ea6-kube-api-access-x2jd5\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.631076 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/452932c0-23f1-40ba-8dd8-121eff6a2ea6-horizon-secret-key\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.733966 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452932c0-23f1-40ba-8dd8-121eff6a2ea6-logs\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.734063 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jd5\" (UniqueName: \"kubernetes.io/projected/452932c0-23f1-40ba-8dd8-121eff6a2ea6-kube-api-access-x2jd5\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.734087 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/452932c0-23f1-40ba-8dd8-121eff6a2ea6-horizon-secret-key\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.734135 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-scripts\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.734171 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-config-data\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.735621 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-config-data\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.735920 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452932c0-23f1-40ba-8dd8-121eff6a2ea6-logs\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.742989 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-scripts\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.744652 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/452932c0-23f1-40ba-8dd8-121eff6a2ea6-horizon-secret-key\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.758279 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jd5\" (UniqueName: \"kubernetes.io/projected/452932c0-23f1-40ba-8dd8-121eff6a2ea6-kube-api-access-x2jd5\") pod \"horizon-78fd484f8f-4w5ks\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:49 crc kubenswrapper[4856]: I1203 09:29:49.901004 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:29:50 crc kubenswrapper[4856]: I1203 09:29:50.571956 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78fd484f8f-4w5ks"] Dec 03 09:29:50 crc kubenswrapper[4856]: W1203 09:29:50.603925 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod452932c0_23f1_40ba_8dd8_121eff6a2ea6.slice/crio-6bb14757922a603191359540c0c04aa9445a0918ce87f89f847eb6b1e3ecbed4 WatchSource:0}: Error finding container 6bb14757922a603191359540c0c04aa9445a0918ce87f89f847eb6b1e3ecbed4: Status 404 returned error can't find the container with id 6bb14757922a603191359540c0c04aa9445a0918ce87f89f847eb6b1e3ecbed4 Dec 03 09:29:50 crc kubenswrapper[4856]: I1203 09:29:50.734742 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7070b51-710e-44fc-aa4b-11bdb4ec0e25" path="/var/lib/kubelet/pods/e7070b51-710e-44fc-aa4b-11bdb4ec0e25/volumes" Dec 03 09:29:51 crc kubenswrapper[4856]: I1203 09:29:51.385550 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zpbhz" event={"ID":"3a337aa7-570e-40fe-86ca-faee49a09165","Type":"ContainerStarted","Data":"17538324df8bc7553ce59c1956db2e2d74a61061088255c4a13b078565f90ccb"} Dec 03 09:29:51 crc kubenswrapper[4856]: I1203 09:29:51.389495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78fd484f8f-4w5ks" event={"ID":"452932c0-23f1-40ba-8dd8-121eff6a2ea6","Type":"ContainerStarted","Data":"6bb14757922a603191359540c0c04aa9445a0918ce87f89f847eb6b1e3ecbed4"} Dec 03 09:29:51 crc kubenswrapper[4856]: I1203 09:29:51.407042 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zpbhz" podStartSLOduration=3.526671721 podStartE2EDuration="38.407009678s" podCreationTimestamp="2025-12-03 09:29:13 +0000 UTC" firstStartedPulling="2025-12-03 09:29:14.414910417 +0000 UTC m=+1022.597802718" lastFinishedPulling="2025-12-03 09:29:49.295248374 +0000 UTC m=+1057.478140675" observedRunningTime="2025-12-03 09:29:51.40584705 +0000 UTC m=+1059.588739351" watchObservedRunningTime="2025-12-03 09:29:51.407009678 +0000 UTC m=+1059.589901989" Dec 03 09:29:52 crc kubenswrapper[4856]: I1203 09:29:52.759382 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:29:52 crc kubenswrapper[4856]: I1203 09:29:52.759719 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:29:52 crc kubenswrapper[4856]: I1203 09:29:52.759774 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:29:52 crc kubenswrapper[4856]: I1203 09:29:52.760622 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7436d114857ce6a80c004a493e358196edfa3481c485bfceac4068566215e92"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:29:52 crc kubenswrapper[4856]: I1203 09:29:52.760682 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://c7436d114857ce6a80c004a493e358196edfa3481c485bfceac4068566215e92" gracePeriod=600 Dec 03 09:29:53 crc kubenswrapper[4856]: I1203 09:29:53.431771 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="c7436d114857ce6a80c004a493e358196edfa3481c485bfceac4068566215e92" exitCode=0 Dec 03 09:29:53 crc kubenswrapper[4856]: I1203 09:29:53.431867 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"c7436d114857ce6a80c004a493e358196edfa3481c485bfceac4068566215e92"} Dec 03 09:29:53 crc kubenswrapper[4856]: I1203 09:29:53.432488 4856 scope.go:117] "RemoveContainer" containerID="4afd4c86b5fa95dc80aeac436708ba8256d79892daf553ea2ccc017832ea0fc1" Dec 03 09:29:54 crc kubenswrapper[4856]: I1203 09:29:54.445410 4856 generic.go:334] "Generic (PLEG): container finished" podID="6c624d83-5d35-4f56-a482-1a6c3f422f19" containerID="db18575320ee58947811506695d259f2d60012e0f2e4bf0a9febe1f1c7c67602" exitCode=0 Dec 03 09:29:54 crc kubenswrapper[4856]: I1203 09:29:54.445482 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wttb4" event={"ID":"6c624d83-5d35-4f56-a482-1a6c3f422f19","Type":"ContainerDied","Data":"db18575320ee58947811506695d259f2d60012e0f2e4bf0a9febe1f1c7c67602"} Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.181430 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-764b5497d9-dcts9"] Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.217423 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-777b75cf48-68qq9"] Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.219666 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.225198 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.239590 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-777b75cf48-68qq9"] Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.292832 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78fd484f8f-4w5ks"] Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.304908 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-tls-certs\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.305020 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf3f133-b175-4a03-9518-91a5bb351c07-logs\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.305055 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-secret-key\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.305081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtlh\" (UniqueName: \"kubernetes.io/projected/bdf3f133-b175-4a03-9518-91a5bb351c07-kube-api-access-hjtlh\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.305108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-combined-ca-bundle\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.305130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-scripts\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.305258 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-config-data\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.333767 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-569f95b-qhsts"] Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.337311 
4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.357560 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-569f95b-qhsts"] Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.407647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-horizon-secret-key\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.407738 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-config-data\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.407999 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-tls-certs\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408119 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqzf\" (UniqueName: \"kubernetes.io/projected/7a3ced31-90f7-4932-999e-49e914166624-kube-api-access-dnqzf\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-horizon-tls-certs\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408407 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a3ced31-90f7-4932-999e-49e914166624-scripts\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408461 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf3f133-b175-4a03-9518-91a5bb351c07-logs\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a3ced31-90f7-4932-999e-49e914166624-logs\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-secret-key\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408715 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-combined-ca-bundle\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408838 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtlh\" (UniqueName: \"kubernetes.io/projected/bdf3f133-b175-4a03-9518-91a5bb351c07-kube-api-access-hjtlh\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf3f133-b175-4a03-9518-91a5bb351c07-logs\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.408924 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-combined-ca-bundle\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.409002 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-scripts\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.409047 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a3ced31-90f7-4932-999e-49e914166624-config-data\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.409076 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-config-data\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.409703 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-scripts\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.420848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-combined-ca-bundle\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") 
" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.511166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnqzf\" (UniqueName: \"kubernetes.io/projected/7a3ced31-90f7-4932-999e-49e914166624-kube-api-access-dnqzf\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.511821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-horizon-tls-certs\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.511923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a3ced31-90f7-4932-999e-49e914166624-scripts\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.511984 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a3ced31-90f7-4932-999e-49e914166624-logs\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.512034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-combined-ca-bundle\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.512094 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a3ced31-90f7-4932-999e-49e914166624-config-data\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.512179 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-horizon-secret-key\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.544485 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-secret-key\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.545419 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-tls-certs\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.546449 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/7a3ced31-90f7-4932-999e-49e914166624-scripts\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.549896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a3ced31-90f7-4932-999e-49e914166624-logs\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.552346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a3ced31-90f7-4932-999e-49e914166624-config-data\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.553101 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtlh\" (UniqueName: \"kubernetes.io/projected/bdf3f133-b175-4a03-9518-91a5bb351c07-kube-api-access-hjtlh\") pod \"horizon-777b75cf48-68qq9\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.553354 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-horizon-secret-key\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.555096 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-combined-ca-bundle\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.556091 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3ced31-90f7-4932-999e-49e914166624-horizon-tls-certs\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.582917 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnqzf\" (UniqueName: \"kubernetes.io/projected/7a3ced31-90f7-4932-999e-49e914166624-kube-api-access-dnqzf\") pod \"horizon-569f95b-qhsts\" (UID: \"7a3ced31-90f7-4932-999e-49e914166624\") " pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.628567 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:29:55 crc kubenswrapper[4856]: I1203 09:29:55.668650 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-569f95b-qhsts" Dec 03 09:29:56 crc kubenswrapper[4856]: I1203 09:29:56.594148 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:29:56 crc kubenswrapper[4856]: I1203 09:29:56.697568 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5wd7d"] Dec 03 09:29:56 crc kubenswrapper[4856]: I1203 09:29:56.697926 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" containerID="cri-o://2662eb87672045bc0806594455099673e165c6f40f3019a3806f18d45b8366fa" gracePeriod=10 Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.157522 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8"] Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.160523 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.163828 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.164242 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.168173 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8"] Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.242640 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-config-volume\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.243201 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-secret-volume\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.243686 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2pk8\" (UniqueName: \"kubernetes.io/projected/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-kube-api-access-w2pk8\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.348315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2pk8\" (UniqueName: \"kubernetes.io/projected/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-kube-api-access-w2pk8\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc 
kubenswrapper[4856]: I1203 09:30:00.348632 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-config-volume\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.348738 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-secret-volume\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.349922 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-config-volume\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.357248 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-secret-volume\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.367055 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2pk8\" (UniqueName: \"kubernetes.io/projected/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-kube-api-access-w2pk8\") pod \"collect-profiles-29412570-2lcr8\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.486874 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.520586 4856 generic.go:334] "Generic (PLEG): container finished" podID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerID="2662eb87672045bc0806594455099673e165c6f40f3019a3806f18d45b8366fa" exitCode=0 Dec 03 09:30:00 crc kubenswrapper[4856]: I1203 09:30:00.520628 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" event={"ID":"996c5c67-0f32-4d87-b31b-35e36f3d7b67","Type":"ContainerDied","Data":"2662eb87672045bc0806594455099673e165c6f40f3019a3806f18d45b8366fa"} Dec 03 09:30:01 crc kubenswrapper[4856]: I1203 09:30:01.594365 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Dec 03 09:30:03 crc kubenswrapper[4856]: E1203 09:30:03.805993 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 03 09:30:03 crc kubenswrapper[4856]: E1203 09:30:03.806663 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zkj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-2dnjq_openstack(17fa448b-f085-4377-a7d2-a4e078ae00c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:30:03 crc kubenswrapper[4856]: E1203 09:30:03.807911 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2dnjq" podUID="17fa448b-f085-4377-a7d2-a4e078ae00c3" Dec 03 09:30:04 crc kubenswrapper[4856]: E1203 09:30:04.578255 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2dnjq" podUID="17fa448b-f085-4377-a7d2-a4e078ae00c3" Dec 03 09:30:06 crc kubenswrapper[4856]: I1203 09:30:06.594540 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.527516 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wttb4" Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.660406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-scripts\") pod \"6c624d83-5d35-4f56-a482-1a6c3f422f19\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.660597 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-combined-ca-bundle\") pod \"6c624d83-5d35-4f56-a482-1a6c3f422f19\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.660628 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-credential-keys\") pod \"6c624d83-5d35-4f56-a482-1a6c3f422f19\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.660682 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt7p6\" (UniqueName: \"kubernetes.io/projected/6c624d83-5d35-4f56-a482-1a6c3f422f19-kube-api-access-lt7p6\") pod \"6c624d83-5d35-4f56-a482-1a6c3f422f19\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.660748 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-config-data\") pod \"6c624d83-5d35-4f56-a482-1a6c3f422f19\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.660824 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-fernet-keys\") pod \"6c624d83-5d35-4f56-a482-1a6c3f422f19\" (UID: \"6c624d83-5d35-4f56-a482-1a6c3f422f19\") " 
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.669835 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6c624d83-5d35-4f56-a482-1a6c3f422f19" (UID: "6c624d83-5d35-4f56-a482-1a6c3f422f19"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.672285 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c624d83-5d35-4f56-a482-1a6c3f422f19-kube-api-access-lt7p6" (OuterVolumeSpecName: "kube-api-access-lt7p6") pod "6c624d83-5d35-4f56-a482-1a6c3f422f19" (UID: "6c624d83-5d35-4f56-a482-1a6c3f422f19"). InnerVolumeSpecName "kube-api-access-lt7p6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.676433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-scripts" (OuterVolumeSpecName: "scripts") pod "6c624d83-5d35-4f56-a482-1a6c3f422f19" (UID: "6c624d83-5d35-4f56-a482-1a6c3f422f19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.690270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6c624d83-5d35-4f56-a482-1a6c3f422f19" (UID: "6c624d83-5d35-4f56-a482-1a6c3f422f19"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.721433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c624d83-5d35-4f56-a482-1a6c3f422f19" (UID: "6c624d83-5d35-4f56-a482-1a6c3f422f19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.724268 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-config-data" (OuterVolumeSpecName: "config-data") pod "6c624d83-5d35-4f56-a482-1a6c3f422f19" (UID: "6c624d83-5d35-4f56-a482-1a6c3f422f19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.734560 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wttb4"
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.765478 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.765882 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.765892 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.765902 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt7p6\" (UniqueName: \"kubernetes.io/projected/6c624d83-5d35-4f56-a482-1a6c3f422f19-kube-api-access-lt7p6\") on node \"crc\" DevicePath \"\""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.765911 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.765919 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c624d83-5d35-4f56-a482-1a6c3f422f19-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.786446 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wttb4" event={"ID":"6c624d83-5d35-4f56-a482-1a6c3f422f19","Type":"ContainerDied","Data":"f203ff6d7fc4c86e7c2d37d51c218ef8c9ec305ac3535847732f0be48a1ccfff"}
Dec 03 09:30:10 crc kubenswrapper[4856]: I1203 09:30:10.786520 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f203ff6d7fc4c86e7c2d37d51c218ef8c9ec305ac3535847732f0be48a1ccfff"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.618355 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wttb4"]
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.629448 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wttb4"]
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.740420 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k5kxs"]
Dec 03 09:30:11 crc kubenswrapper[4856]: E1203 09:30:11.740950 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c624d83-5d35-4f56-a482-1a6c3f422f19" containerName="keystone-bootstrap"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.740968 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c624d83-5d35-4f56-a482-1a6c3f422f19" containerName="keystone-bootstrap"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.741185 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c624d83-5d35-4f56-a482-1a6c3f422f19" containerName="keystone-bootstrap"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.741945 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.753213 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k5kxs"]
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.801745 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.801909 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.802126 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.802222 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.802447 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k95wr"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.906640 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-combined-ca-bundle\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.907229 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-credential-keys\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.907315 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-fernet-keys\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.907460 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-scripts\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.907697 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87mg\" (UniqueName: \"kubernetes.io/projected/318afd46-7100-422f-983d-0a9c87cc38c6-kube-api-access-t87mg\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:11 crc kubenswrapper[4856]: I1203 09:30:11.907863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-config-data\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.010254 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-combined-ca-bundle\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.010356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-credential-keys\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.010380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-fernet-keys\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.010405 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-scripts\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.010451 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t87mg\" (UniqueName: \"kubernetes.io/projected/318afd46-7100-422f-983d-0a9c87cc38c6-kube-api-access-t87mg\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.010488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-config-data\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.018525 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-credential-keys\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.018550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-scripts\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.019649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-config-data\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.020432 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-fernet-keys\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs"
pod="openstack/keystone-bootstrap-k5kxs" Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.026592 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-combined-ca-bundle\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs" Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.067488 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t87mg\" (UniqueName: \"kubernetes.io/projected/318afd46-7100-422f-983d-0a9c87cc38c6-kube-api-access-t87mg\") pod \"keystone-bootstrap-k5kxs\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " pod="openstack/keystone-bootstrap-k5kxs" Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.124237 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k5kxs" Dec 03 09:30:12 crc kubenswrapper[4856]: I1203 09:30:12.724177 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c624d83-5d35-4f56-a482-1a6c3f422f19" path="/var/lib/kubelet/pods/6c624d83-5d35-4f56-a482-1a6c3f422f19/volumes" Dec 03 09:30:16 crc kubenswrapper[4856]: I1203 09:30:16.595029 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Dec 03 09:30:16 crc kubenswrapper[4856]: I1203 09:30:16.595835 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:30:20 crc kubenswrapper[4856]: E1203 09:30:20.282638 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 03 09:30:20 crc kubenswrapper[4856]: E1203 09:30:20.282850 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n544h555h8h667h57h65h59dh686h548hd9h5cdh64bh99hb8h5ffh669h5bdh698h86h54fh5d8hcbh5b7h5c5h55dh658h588h68bh577h645h54dh5d9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d683d201-f027-4492-966c-95fa0e5004cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:30:20 crc kubenswrapper[4856]: E1203 09:30:20.858729 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 03 09:30:20 crc kubenswrapper[4856]: E1203 09:30:20.858946 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2shnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-l9lg2_openstack(a3f84948-d98d-443d-9055-b4f4d28369b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:30:20 crc kubenswrapper[4856]: E1203 09:30:20.860168 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-l9lg2" podUID="a3f84948-d98d-443d-9055-b4f4d28369b4" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.051389 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.068776 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-nb\") pod \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.068832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-config\") pod \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.068933 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-sb\") pod \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.068975 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-dns-svc\") pod \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.069002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp2n6\" (UniqueName: \"kubernetes.io/projected/996c5c67-0f32-4d87-b31b-35e36f3d7b67-kube-api-access-fp2n6\") pod \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\" (UID: \"996c5c67-0f32-4d87-b31b-35e36f3d7b67\") " Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.095855 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996c5c67-0f32-4d87-b31b-35e36f3d7b67-kube-api-access-fp2n6" (OuterVolumeSpecName: "kube-api-access-fp2n6") pod "996c5c67-0f32-4d87-b31b-35e36f3d7b67" (UID: "996c5c67-0f32-4d87-b31b-35e36f3d7b67"). InnerVolumeSpecName "kube-api-access-fp2n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.143701 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "996c5c67-0f32-4d87-b31b-35e36f3d7b67" (UID: "996c5c67-0f32-4d87-b31b-35e36f3d7b67"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.151615 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "996c5c67-0f32-4d87-b31b-35e36f3d7b67" (UID: "996c5c67-0f32-4d87-b31b-35e36f3d7b67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.166088 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "996c5c67-0f32-4d87-b31b-35e36f3d7b67" (UID: "996c5c67-0f32-4d87-b31b-35e36f3d7b67"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.171919 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp2n6\" (UniqueName: \"kubernetes.io/projected/996c5c67-0f32-4d87-b31b-35e36f3d7b67-kube-api-access-fp2n6\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.171973 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.171989 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.172003 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.187775 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-config" (OuterVolumeSpecName: "config") pod "996c5c67-0f32-4d87-b31b-35e36f3d7b67" (UID: "996c5c67-0f32-4d87-b31b-35e36f3d7b67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.273553 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996c5c67-0f32-4d87-b31b-35e36f3d7b67-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.596349 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.847116 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" event={"ID":"996c5c67-0f32-4d87-b31b-35e36f3d7b67","Type":"ContainerDied","Data":"abf45dd0592f241a037985932f138b7b219ba7e9f9f1fb2907f8ed7b77967cb1"} Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.847296 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5wd7d" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.852063 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a337aa7-570e-40fe-86ca-faee49a09165" containerID="17538324df8bc7553ce59c1956db2e2d74a61061088255c4a13b078565f90ccb" exitCode=0 Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.852250 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zpbhz" event={"ID":"3a337aa7-570e-40fe-86ca-faee49a09165","Type":"ContainerDied","Data":"17538324df8bc7553ce59c1956db2e2d74a61061088255c4a13b078565f90ccb"} Dec 03 09:30:21 crc kubenswrapper[4856]: E1203 09:30:21.855048 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-l9lg2" podUID="a3f84948-d98d-443d-9055-b4f4d28369b4" Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.936770 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5wd7d"] Dec 03 09:30:21 crc kubenswrapper[4856]: I1203 09:30:21.949202 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5wd7d"] Dec 03 09:30:22 crc kubenswrapper[4856]: I1203 09:30:22.563924 4856 scope.go:117] "RemoveContainer" containerID="2662eb87672045bc0806594455099673e165c6f40f3019a3806f18d45b8366fa" Dec 03 09:30:22 crc kubenswrapper[4856]: E1203 09:30:22.584505 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 03 09:30:22 crc kubenswrapper[4856]: E1203 09:30:22.584739 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j574,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jpsww_openstack(74e3fff2-b7c3-4cd4-b62e-83af7da6e87b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:30:22 crc kubenswrapper[4856]: E1203 09:30:22.586641 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jpsww" podUID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" Dec 03 09:30:22 crc kubenswrapper[4856]: I1203 09:30:22.699172 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" path="/var/lib/kubelet/pods/996c5c67-0f32-4d87-b31b-35e36f3d7b67/volumes" Dec 03 09:30:23 crc kubenswrapper[4856]: E1203 09:30:23.009649 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-jpsww" podUID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.049437 4856 scope.go:117] "RemoveContainer" 
containerID="b84a888297f5d9ba6e714b848fcfdac9a4cebe65c4d0f4d263441d1a4b08fdc4" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.706431 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zpbhz" Dec 03 09:30:23 crc kubenswrapper[4856]: W1203 09:30:23.711610 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a3ced31_90f7_4932_999e_49e914166624.slice/crio-e7cc637c3841de0280daee4d433af77e8244bf1536b8ef0c7c4e7223b1a924ac WatchSource:0}: Error finding container e7cc637c3841de0280daee4d433af77e8244bf1536b8ef0c7c4e7223b1a924ac: Status 404 returned error can't find the container with id e7cc637c3841de0280daee4d433af77e8244bf1536b8ef0c7c4e7223b1a924ac Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.722485 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-777b75cf48-68qq9"] Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.732340 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.753530 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-569f95b-qhsts"] Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.764971 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k5kxs"] Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.771222 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-config-data\") pod \"3a337aa7-570e-40fe-86ca-faee49a09165\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.771431 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vdvw\" (UniqueName: \"kubernetes.io/projected/3a337aa7-570e-40fe-86ca-faee49a09165-kube-api-access-2vdvw\") pod \"3a337aa7-570e-40fe-86ca-faee49a09165\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.771527 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-db-sync-config-data\") pod \"3a337aa7-570e-40fe-86ca-faee49a09165\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.771609 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-combined-ca-bundle\") pod \"3a337aa7-570e-40fe-86ca-faee49a09165\" (UID: \"3a337aa7-570e-40fe-86ca-faee49a09165\") " Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.779971 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a337aa7-570e-40fe-86ca-faee49a09165-kube-api-access-2vdvw" (OuterVolumeSpecName: "kube-api-access-2vdvw") pod "3a337aa7-570e-40fe-86ca-faee49a09165" (UID: "3a337aa7-570e-40fe-86ca-faee49a09165"). InnerVolumeSpecName "kube-api-access-2vdvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.782350 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8"] Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.816480 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3a337aa7-570e-40fe-86ca-faee49a09165" (UID: "3a337aa7-570e-40fe-86ca-faee49a09165"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.873455 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vdvw\" (UniqueName: \"kubernetes.io/projected/3a337aa7-570e-40fe-86ca-faee49a09165-kube-api-access-2vdvw\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.873483 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.959350 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a337aa7-570e-40fe-86ca-faee49a09165" (UID: "3a337aa7-570e-40fe-86ca-faee49a09165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.961416 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-config-data" (OuterVolumeSpecName: "config-data") pod "3a337aa7-570e-40fe-86ca-faee49a09165" (UID: "3a337aa7-570e-40fe-86ca-faee49a09165"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.971502 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-777b75cf48-68qq9" event={"ID":"bdf3f133-b175-4a03-9518-91a5bb351c07","Type":"ContainerStarted","Data":"4dc1949bf505413846a5690e208c31f7984872aa4e454740225ba7b1c1d2778c"} Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.979798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-764b5497d9-dcts9" event={"ID":"032e03e8-4998-4765-a8e5-b80f5ad90372","Type":"ContainerStarted","Data":"d4bc1e99b4ae36a2bc75192793bc4e40a9dc8df477025847c42f2f6fa86d4508"} Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.980024 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-764b5497d9-dcts9" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon-log" containerID="cri-o://d4bc1e99b4ae36a2bc75192793bc4e40a9dc8df477025847c42f2f6fa86d4508" gracePeriod=30 Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.980736 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-764b5497d9-dcts9" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon" containerID="cri-o://fc0bd12469bc5033edc430b4585c91f3665258ecf2d573ea9d107523331f910b" gracePeriod=30 Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.991953 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dnjq" event={"ID":"17fa448b-f085-4377-a7d2-a4e078ae00c3","Type":"ContainerStarted","Data":"700b4bc7724023764e2ed91f12e99f5a84ee1952b947fa06b5e647b117fbf740"} Dec 03 09:30:23 crc kubenswrapper[4856]: I1203 09:30:23.993979 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" event={"ID":"11af8d9b-d6bc-43a5-9a08-cd946ac9acac","Type":"ContainerStarted","Data":"9209b86025d49777d465b098d3b506e0328b21d07b4c3381571fc99422cef644"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.000042 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.000162 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a337aa7-570e-40fe-86ca-faee49a09165-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.009850 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zpbhz" event={"ID":"3a337aa7-570e-40fe-86ca-faee49a09165","Type":"ContainerDied","Data":"052de2a5903b1f6a56d298523ecef1f2d8df1d5322ed78ad764d2f4baa23e561"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.014330 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052de2a5903b1f6a56d298523ecef1f2d8df1d5322ed78ad764d2f4baa23e561" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.014007 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zpbhz" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.017782 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6d6478fc-xjpl5" event={"ID":"0cadb6e2-105e-4b63-afeb-77fe7030b233","Type":"ContainerStarted","Data":"1e4147583c61c530af14c175e04cda874e752fd546e942b5665f2daeecd3e1fc"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.025403 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5kxs" event={"ID":"318afd46-7100-422f-983d-0a9c87cc38c6","Type":"ContainerStarted","Data":"b93e6277d2d6ed94fa7d5808f8a0dae5226632e11c78e66178d4fa0352d3a102"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.025967 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-764b5497d9-dcts9" podStartSLOduration=5.643311331 podStartE2EDuration="39.025946044s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="2025-12-03 09:29:47.476482939 +0000 UTC m=+1055.659375240" lastFinishedPulling="2025-12-03 09:30:20.859117652 +0000 UTC m=+1089.042009953" observedRunningTime="2025-12-03 09:30:24.009213298 +0000 UTC m=+1092.192105599" watchObservedRunningTime="2025-12-03 09:30:24.025946044 +0000 UTC m=+1092.208838345" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.033316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"ed837950b4f1ff5aa3351c07c5f4c690e23cb382dcedbde240c466d1592873b7"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.036969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569f95b-qhsts" event={"ID":"7a3ced31-90f7-4932-999e-49e914166624","Type":"ContainerStarted","Data":"e7cc637c3841de0280daee4d433af77e8244bf1536b8ef0c7c4e7223b1a924ac"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.043855 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78fd484f8f-4w5ks" event={"ID":"452932c0-23f1-40ba-8dd8-121eff6a2ea6","Type":"ContainerStarted","Data":"2e6d89843b66128e4d4e9a51432fad7cf0620a0f6bd20e2e3a576480d4146fa9"} Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.044154 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2dnjq" podStartSLOduration=3.437693018 podStartE2EDuration="39.044135685s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="2025-12-03 09:29:47.463601936 +0000 UTC m=+1055.646494237" lastFinishedPulling="2025-12-03 09:30:23.070044603 +0000 UTC m=+1091.252936904" observedRunningTime="2025-12-03 09:30:24.026910047 +0000 UTC m=+1092.209802358" watchObservedRunningTime="2025-12-03 09:30:24.044135685 +0000 UTC m=+1092.227027986" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.446574 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kv9q8"] Dec 03 09:30:24 crc kubenswrapper[4856]: E1203 09:30:24.447291 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="init" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.447311 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="init" Dec 03 09:30:24 crc kubenswrapper[4856]: E1203 09:30:24.447338 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.447344 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" Dec 03 09:30:24 crc kubenswrapper[4856]: E1203 09:30:24.447354 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a337aa7-570e-40fe-86ca-faee49a09165" containerName="glance-db-sync" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.447360 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a337aa7-570e-40fe-86ca-faee49a09165" containerName="glance-db-sync" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.447662 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a337aa7-570e-40fe-86ca-faee49a09165" containerName="glance-db-sync" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.447678 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="996c5c67-0f32-4d87-b31b-35e36f3d7b67" containerName="dnsmasq-dns" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.449177 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.462417 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kv9q8"] Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.516562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.516675 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.516715 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-config\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.516841 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.516900 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7qrq\" (UniqueName: \"kubernetes.io/projected/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-kube-api-access-v7qrq\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.516926 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.655700 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.655776 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.655823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-config\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.655892 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.655933 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7qrq\" (UniqueName: \"kubernetes.io/projected/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-kube-api-access-v7qrq\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.655953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.657419 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.658597 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.658936 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.661844 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-config\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.666109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.713618 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7qrq\" (UniqueName: \"kubernetes.io/projected/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-kube-api-access-v7qrq\") pod \"dnsmasq-dns-56df8fb6b7-kv9q8\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:24 crc kubenswrapper[4856]: I1203 09:30:24.783902 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.059068 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5kxs" event={"ID":"318afd46-7100-422f-983d-0a9c87cc38c6","Type":"ContainerStarted","Data":"c6eda802efe3ea79c12bda1a57d85ecebecb2503601b65c5cfc8e48473010979"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.069186 4856 generic.go:334] "Generic (PLEG): container finished" podID="11af8d9b-d6bc-43a5-9a08-cd946ac9acac" containerID="8ad3bef55691a965283b0d81790846eb98cb16e139bafcd6b48e710524595e8a" exitCode=0 Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.069376 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" event={"ID":"11af8d9b-d6bc-43a5-9a08-cd946ac9acac","Type":"ContainerDied","Data":"8ad3bef55691a965283b0d81790846eb98cb16e139bafcd6b48e710524595e8a"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.077945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569f95b-qhsts" event={"ID":"7a3ced31-90f7-4932-999e-49e914166624","Type":"ContainerStarted","Data":"26559a28126be572c8424cda0fd6c73347c982b7f353025008b5e9a606b00ecc"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.093757 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78fd484f8f-4w5ks" event={"ID":"452932c0-23f1-40ba-8dd8-121eff6a2ea6","Type":"ContainerStarted","Data":"93459fc184cdb63fb1dc5fc7ac130af195be451b4b25efa992d10a4ce944fa52"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.093929 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78fd484f8f-4w5ks" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon-log" containerID="cri-o://2e6d89843b66128e4d4e9a51432fad7cf0620a0f6bd20e2e3a576480d4146fa9" gracePeriod=30 Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.094057 4856 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/horizon-78fd484f8f-4w5ks" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon" containerID="cri-o://93459fc184cdb63fb1dc5fc7ac130af195be451b4b25efa992d10a4ce944fa52" gracePeriod=30 Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.097954 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k5kxs" podStartSLOduration=14.097914861 podStartE2EDuration="14.097914861s" podCreationTimestamp="2025-12-03 09:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:25.084300791 +0000 UTC m=+1093.267193092" watchObservedRunningTime="2025-12-03 09:30:25.097914861 +0000 UTC m=+1093.280807162" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.102923 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-777b75cf48-68qq9" event={"ID":"bdf3f133-b175-4a03-9518-91a5bb351c07","Type":"ContainerStarted","Data":"8e8d8d0e4f752b03d3faa4fb027bc5872ed5ce85678c10a5a108fa2cb35e40a3"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.112667 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-764b5497d9-dcts9" event={"ID":"032e03e8-4998-4765-a8e5-b80f5ad90372","Type":"ContainerStarted","Data":"fc0bd12469bc5033edc430b4585c91f3665258ecf2d573ea9d107523331f910b"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.114817 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6d6478fc-xjpl5" event={"ID":"0cadb6e2-105e-4b63-afeb-77fe7030b233","Type":"ContainerStarted","Data":"f7c92fc6193b67cc52cac39374c20437c0be502c91f48db466472c343e7eb68e"} Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.114903 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c6d6478fc-xjpl5" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon-log" containerID="cri-o://1e4147583c61c530af14c175e04cda874e752fd546e942b5665f2daeecd3e1fc" gracePeriod=30 Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.115194 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c6d6478fc-xjpl5" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon" containerID="cri-o://f7c92fc6193b67cc52cac39374c20437c0be502c91f48db466472c343e7eb68e" gracePeriod=30 Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.142473 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78fd484f8f-4w5ks" podStartSLOduration=3.69597108 podStartE2EDuration="36.142450062s" podCreationTimestamp="2025-12-03 09:29:49 +0000 UTC" firstStartedPulling="2025-12-03 09:29:50.607907771 +0000 UTC m=+1058.790800072" lastFinishedPulling="2025-12-03 09:30:23.054386763 +0000 UTC m=+1091.237279054" observedRunningTime="2025-12-03 09:30:25.130669296 +0000 UTC m=+1093.313561597" watchObservedRunningTime="2025-12-03 09:30:25.142450062 +0000 UTC m=+1093.325342363" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.159020 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c6d6478fc-xjpl5" podStartSLOduration=4.607057088 podStartE2EDuration="40.159000883s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="2025-12-03 09:29:47.265346786 +0000 UTC m=+1055.448239077" lastFinishedPulling="2025-12-03 09:30:22.817290571 +0000 UTC m=+1091.000182872" observedRunningTime="2025-12-03 
09:30:25.152455405 +0000 UTC m=+1093.335347716" watchObservedRunningTime="2025-12-03 09:30:25.159000883 +0000 UTC m=+1093.341893184" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.409187 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.418835 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.423824 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.424097 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-glqkm" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.424265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.452230 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.541201 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.543229 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.551335 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.551974 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591277 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-logs\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591510 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591538 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591653 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghm9\" (UniqueName: \"kubernetes.io/projected/0c9e232d-bd23-409c-bad7-3afa8197e1ee-kube-api-access-zghm9\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591694 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.591716 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702249 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghm9\" (UniqueName: \"kubernetes.io/projected/0c9e232d-bd23-409c-bad7-3afa8197e1ee-kube-api-access-zghm9\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702446 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702483 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702519 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: 
I1203 09:30:25.702548 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.702587 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.703225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-logs\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.703310 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-logs\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.703387 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvvjb\" (UniqueName: \"kubernetes.io/projected/de1ac53f-8ebc-477e-951a-c352f5190dfa-kube-api-access-bvvjb\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.703443 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.703474 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.703519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.705132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-logs\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.705692 4856 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.708845 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.717359 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.724060 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.724127 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.735782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghm9\" (UniqueName: \"kubernetes.io/projected/0c9e232d-bd23-409c-bad7-3afa8197e1ee-kube-api-access-zghm9\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.763143 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvvjb\" (UniqueName: \"kubernetes.io/projected/de1ac53f-8ebc-477e-951a-c352f5190dfa-kube-api-access-bvvjb\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805417 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805534 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805566 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805596 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.805697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-logs\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.806294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-logs\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.809494 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.815335 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.817439 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.817578 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.818410 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.852502 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.896095 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:30:25 crc kubenswrapper[4856]: I1203 09:30:25.936198 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvvjb\" (UniqueName: \"kubernetes.io/projected/de1ac53f-8ebc-477e-951a-c352f5190dfa-kube-api-access-bvvjb\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:26 crc kubenswrapper[4856]: I1203 09:30:26.030641 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:26 crc kubenswrapper[4856]: I1203 09:30:26.381371 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:26 crc kubenswrapper[4856]: I1203 09:30:26.554980 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:30:26 crc kubenswrapper[4856]: I1203 09:30:26.583477 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kv9q8"] Dec 03 09:30:26 crc kubenswrapper[4856]: I1203 09:30:26.950999 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.104406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-config-volume\") pod \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.104557 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-secret-volume\") pod \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.104759 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2pk8\" (UniqueName: \"kubernetes.io/projected/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-kube-api-access-w2pk8\") pod \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\" (UID: \"11af8d9b-d6bc-43a5-9a08-cd946ac9acac\") " Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.111451 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-config-volume" (OuterVolumeSpecName: "config-volume") pod "11af8d9b-d6bc-43a5-9a08-cd946ac9acac" (UID: "11af8d9b-d6bc-43a5-9a08-cd946ac9acac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.112340 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-kube-api-access-w2pk8" (OuterVolumeSpecName: "kube-api-access-w2pk8") pod "11af8d9b-d6bc-43a5-9a08-cd946ac9acac" (UID: "11af8d9b-d6bc-43a5-9a08-cd946ac9acac"). InnerVolumeSpecName "kube-api-access-w2pk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.120137 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11af8d9b-d6bc-43a5-9a08-cd946ac9acac" (UID: "11af8d9b-d6bc-43a5-9a08-cd946ac9acac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.207449 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2pk8\" (UniqueName: \"kubernetes.io/projected/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-kube-api-access-w2pk8\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.207490 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.207507 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11af8d9b-d6bc-43a5-9a08-cd946ac9acac-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.228254 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.230979 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" event={"ID":"11af8d9b-d6bc-43a5-9a08-cd946ac9acac","Type":"ContainerDied","Data":"9209b86025d49777d465b098d3b506e0328b21d07b4c3381571fc99422cef644"} Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.231035 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9209b86025d49777d465b098d3b506e0328b21d07b4c3381571fc99422cef644" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.231136 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.245567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569f95b-qhsts" event={"ID":"7a3ced31-90f7-4932-999e-49e914166624","Type":"ContainerStarted","Data":"db001aa9c38b8e5087766ac9fda98dcd16b72c11ed1cd409485b59fe77f3bd17"} Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.250423 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d683d201-f027-4492-966c-95fa0e5004cd","Type":"ContainerStarted","Data":"6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8"} Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.254625 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-777b75cf48-68qq9" event={"ID":"bdf3f133-b175-4a03-9518-91a5bb351c07","Type":"ContainerStarted","Data":"279dc7b9f3a417f2f052c9b05e4582beabe488060296a80e641c58f25f734338"} Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.259039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" event={"ID":"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2","Type":"ContainerStarted","Data":"d0335f91b1ee267925ae355a1f6d718a63693670100435c4fe9b64fbadcdae19"} Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.259088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" event={"ID":"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2","Type":"ContainerStarted","Data":"06955b4c4b9948a0587778e0c3499b0b02489ee960190ae3aa936923a2677590"} Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.289142 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-569f95b-qhsts" 
podStartSLOduration=32.289115083 podStartE2EDuration="32.289115083s" podCreationTimestamp="2025-12-03 09:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:27.278126807 +0000 UTC m=+1095.461019108" watchObservedRunningTime="2025-12-03 09:30:27.289115083 +0000 UTC m=+1095.472007384" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.560774 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-777b75cf48-68qq9" podStartSLOduration=32.560752314 podStartE2EDuration="32.560752314s" podCreationTimestamp="2025-12-03 09:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:27.520790724 +0000 UTC m=+1095.703683025" watchObservedRunningTime="2025-12-03 09:30:27.560752314 +0000 UTC m=+1095.743644615" Dec 03 09:30:27 crc kubenswrapper[4856]: I1203 09:30:27.620608 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:28 crc kubenswrapper[4856]: I1203 09:30:28.352316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1ac53f-8ebc-477e-951a-c352f5190dfa","Type":"ContainerStarted","Data":"51c52996b60d654bcb4cba26af3008ecb7f5c273b130d4b1fff840c788609863"} Dec 03 09:30:28 crc kubenswrapper[4856]: I1203 09:30:28.363732 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c9e232d-bd23-409c-bad7-3afa8197e1ee","Type":"ContainerStarted","Data":"4fab0076f95c6ffbb1f43dc341fe7a3e7a463ef4ef32827c31dd1db772085f7e"} Dec 03 09:30:28 crc kubenswrapper[4856]: I1203 09:30:28.367708 4856 generic.go:334] "Generic (PLEG): container finished" podID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerID="d0335f91b1ee267925ae355a1f6d718a63693670100435c4fe9b64fbadcdae19" exitCode=0 Dec 03 09:30:28 crc kubenswrapper[4856]: I1203 09:30:28.367825 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" event={"ID":"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2","Type":"ContainerDied","Data":"d0335f91b1ee267925ae355a1f6d718a63693670100435c4fe9b64fbadcdae19"} Dec 03 09:30:28 crc kubenswrapper[4856]: I1203 09:30:28.817899 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:28 crc kubenswrapper[4856]: I1203 09:30:28.909098 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:29 crc kubenswrapper[4856]: I1203 09:30:29.902031 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:30:32 crc kubenswrapper[4856]: I1203 09:30:32.420815 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1ac53f-8ebc-477e-951a-c352f5190dfa","Type":"ContainerStarted","Data":"9f83e48162abd1ca4a4f27ba238065a009f6cd4c4f4a8b0f6d3aba698d8fd79d"} Dec 03 09:30:33 crc kubenswrapper[4856]: I1203 09:30:33.447156 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c9e232d-bd23-409c-bad7-3afa8197e1ee","Type":"ContainerStarted","Data":"f19687cf3d9a53b0a8dedae482c18a78038f58a556473864cad0f3d5d7caf743"} Dec 03 09:30:34 crc kubenswrapper[4856]: I1203 09:30:34.468855 4856 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" event={"ID":"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2","Type":"ContainerStarted","Data":"5ae544c6f69bc1cc6871772f869f660717a077be5989d3dff6c80edf8291f55f"} Dec 03 09:30:34 crc kubenswrapper[4856]: I1203 09:30:34.470486 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:34 crc kubenswrapper[4856]: I1203 09:30:34.498488 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" podStartSLOduration=10.498460982 podStartE2EDuration="10.498460982s" podCreationTimestamp="2025-12-03 09:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:34.493294236 +0000 UTC m=+1102.676186537" watchObservedRunningTime="2025-12-03 09:30:34.498460982 +0000 UTC m=+1102.681353283" Dec 03 09:30:35 crc kubenswrapper[4856]: I1203 09:30:35.629418 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:30:35 crc kubenswrapper[4856]: I1203 09:30:35.634161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:30:35 crc kubenswrapper[4856]: I1203 09:30:35.669751 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-569f95b-qhsts" Dec 03 09:30:35 crc kubenswrapper[4856]: I1203 09:30:35.670808 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-569f95b-qhsts" Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.520802 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1ac53f-8ebc-477e-951a-c352f5190dfa","Type":"ContainerStarted","Data":"c8ea62dc28799cf8a57ab49320722757e7daa1b484fab63adc64749d2e3de06c"} Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.521659 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-log" containerID="cri-o://9f83e48162abd1ca4a4f27ba238065a009f6cd4c4f4a8b0f6d3aba698d8fd79d" gracePeriod=30 Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.523679 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-httpd" containerID="cri-o://c8ea62dc28799cf8a57ab49320722757e7daa1b484fab63adc64749d2e3de06c" gracePeriod=30 Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.538344 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-log" containerID="cri-o://f19687cf3d9a53b0a8dedae482c18a78038f58a556473864cad0f3d5d7caf743" gracePeriod=30 Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.538485 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c9e232d-bd23-409c-bad7-3afa8197e1ee","Type":"ContainerStarted","Data":"bf9f916eab5ebbe6a6d046f935d757b39d465d2b2915aa51e82f4fd54310d219"} Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.538565 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-httpd" containerID="cri-o://bf9f916eab5ebbe6a6d046f935d757b39d465d2b2915aa51e82f4fd54310d219" gracePeriod=30 Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.566740 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.566711969 podStartE2EDuration="12.566711969s" podCreationTimestamp="2025-12-03 09:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:36.563714056 +0000 UTC m=+1104.746606367" watchObservedRunningTime="2025-12-03 09:30:36.566711969 +0000 UTC m=+1104.749604270" Dec 03 09:30:36 crc kubenswrapper[4856]: I1203 09:30:36.619550 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.61952432 podStartE2EDuration="12.61952432s" podCreationTimestamp="2025-12-03 09:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:36.600475198 +0000 UTC m=+1104.783367509" watchObservedRunningTime="2025-12-03 09:30:36.61952432 +0000 UTC m=+1104.802416631" Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.558232 4856 generic.go:334] "Generic (PLEG): container finished" podID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerID="c8ea62dc28799cf8a57ab49320722757e7daa1b484fab63adc64749d2e3de06c" exitCode=0 Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.558275 4856 generic.go:334] "Generic (PLEG): container finished" podID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerID="9f83e48162abd1ca4a4f27ba238065a009f6cd4c4f4a8b0f6d3aba698d8fd79d" exitCode=143 Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.558350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1ac53f-8ebc-477e-951a-c352f5190dfa","Type":"ContainerDied","Data":"c8ea62dc28799cf8a57ab49320722757e7daa1b484fab63adc64749d2e3de06c"} Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.558435 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1ac53f-8ebc-477e-951a-c352f5190dfa","Type":"ContainerDied","Data":"9f83e48162abd1ca4a4f27ba238065a009f6cd4c4f4a8b0f6d3aba698d8fd79d"} Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.562597 4856 generic.go:334] "Generic (PLEG): container finished" podID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerID="bf9f916eab5ebbe6a6d046f935d757b39d465d2b2915aa51e82f4fd54310d219" exitCode=0 Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.562627 4856 generic.go:334] "Generic (PLEG): container finished" podID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerID="f19687cf3d9a53b0a8dedae482c18a78038f58a556473864cad0f3d5d7caf743" exitCode=143 Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.562648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c9e232d-bd23-409c-bad7-3afa8197e1ee","Type":"ContainerDied","Data":"bf9f916eab5ebbe6a6d046f935d757b39d465d2b2915aa51e82f4fd54310d219"} Dec 03 09:30:37 crc kubenswrapper[4856]: I1203 09:30:37.562671 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"0c9e232d-bd23-409c-bad7-3afa8197e1ee","Type":"ContainerDied","Data":"f19687cf3d9a53b0a8dedae482c18a78038f58a556473864cad0f3d5d7caf743"} Dec 03 09:30:39 crc kubenswrapper[4856]: I1203 09:30:39.786190 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:30:39 crc kubenswrapper[4856]: I1203 09:30:39.883277 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-g7ssp"] Dec 03 09:30:39 crc kubenswrapper[4856]: I1203 09:30:39.884174 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="dnsmasq-dns" containerID="cri-o://3ec631465d7275912bc9cf2c9740db5f9372aaa229307f64206922774d1c0820" gracePeriod=10 Dec 03 09:30:40 crc kubenswrapper[4856]: I1203 09:30:40.615092 4856 generic.go:334] "Generic (PLEG): container finished" podID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerID="3ec631465d7275912bc9cf2c9740db5f9372aaa229307f64206922774d1c0820" exitCode=0 Dec 03 09:30:40 crc kubenswrapper[4856]: I1203 09:30:40.615140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" event={"ID":"9fdad791-ab4e-49cb-acc9-f49240405f83","Type":"ContainerDied","Data":"3ec631465d7275912bc9cf2c9740db5f9372aaa229307f64206922774d1c0820"} Dec 03 09:30:41 crc kubenswrapper[4856]: I1203 09:30:41.594430 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 03 09:30:45 crc kubenswrapper[4856]: I1203 09:30:45.630659 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-777b75cf48-68qq9" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 03 09:30:45 crc kubenswrapper[4856]: I1203 09:30:45.670860 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-569f95b-qhsts" podUID="7a3ced31-90f7-4932-999e-49e914166624" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.593665 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.896454 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929495 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-combined-ca-bundle\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929538 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghm9\" (UniqueName: \"kubernetes.io/projected/0c9e232d-bd23-409c-bad7-3afa8197e1ee-kube-api-access-zghm9\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-httpd-run\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929707 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-config-data\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929764 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-scripts\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.929786 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-logs\") pod \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\" (UID: \"0c9e232d-bd23-409c-bad7-3afa8197e1ee\") " Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.930496 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-logs" (OuterVolumeSpecName: "logs") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.933517 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.937419 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.939478 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-scripts" (OuterVolumeSpecName: "scripts") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.941105 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9e232d-bd23-409c-bad7-3afa8197e1ee-kube-api-access-zghm9" (OuterVolumeSpecName: "kube-api-access-zghm9") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "kube-api-access-zghm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:46 crc kubenswrapper[4856]: I1203 09:30:46.966998 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.016972 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-config-data" (OuterVolumeSpecName: "config-data") pod "0c9e232d-bd23-409c-bad7-3afa8197e1ee" (UID: "0c9e232d-bd23-409c-bad7-3afa8197e1ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031636 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031668 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031677 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031689 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031702 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghm9\" (UniqueName: \"kubernetes.io/projected/0c9e232d-bd23-409c-bad7-3afa8197e1ee-kube-api-access-zghm9\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031711 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c9e232d-bd23-409c-bad7-3afa8197e1ee-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.031719 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9e232d-bd23-409c-bad7-3afa8197e1ee-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.051377 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.058203 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.132337 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-config\") pod \"9fdad791-ab4e-49cb-acc9-f49240405f83\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.132707 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptf7n\" (UniqueName: \"kubernetes.io/projected/9fdad791-ab4e-49cb-acc9-f49240405f83-kube-api-access-ptf7n\") pod \"9fdad791-ab4e-49cb-acc9-f49240405f83\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.132775 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-svc\") pod \"9fdad791-ab4e-49cb-acc9-f49240405f83\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.132830 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-nb\") pod \"9fdad791-ab4e-49cb-acc9-f49240405f83\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.132875 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-swift-storage-0\") pod \"9fdad791-ab4e-49cb-acc9-f49240405f83\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.132923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-sb\") pod \"9fdad791-ab4e-49cb-acc9-f49240405f83\" (UID: \"9fdad791-ab4e-49cb-acc9-f49240405f83\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.133374 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.143121 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdad791-ab4e-49cb-acc9-f49240405f83-kube-api-access-ptf7n" (OuterVolumeSpecName: "kube-api-access-ptf7n") pod "9fdad791-ab4e-49cb-acc9-f49240405f83" (UID: "9fdad791-ab4e-49cb-acc9-f49240405f83"). InnerVolumeSpecName "kube-api-access-ptf7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.219508 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fdad791-ab4e-49cb-acc9-f49240405f83" (UID: "9fdad791-ab4e-49cb-acc9-f49240405f83"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.287037 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.287062 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptf7n\" (UniqueName: \"kubernetes.io/projected/9fdad791-ab4e-49cb-acc9-f49240405f83-kube-api-access-ptf7n\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.350505 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fdad791-ab4e-49cb-acc9-f49240405f83" (UID: "9fdad791-ab4e-49cb-acc9-f49240405f83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.350841 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fdad791-ab4e-49cb-acc9-f49240405f83" (UID: "9fdad791-ab4e-49cb-acc9-f49240405f83"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.362851 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fdad791-ab4e-49cb-acc9-f49240405f83" (UID: "9fdad791-ab4e-49cb-acc9-f49240405f83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.388349 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-config" (OuterVolumeSpecName: "config") pod "9fdad791-ab4e-49cb-acc9-f49240405f83" (UID: "9fdad791-ab4e-49cb-acc9-f49240405f83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.393927 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.394639 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.394657 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.394673 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fdad791-ab4e-49cb-acc9-f49240405f83-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.665249 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.714784 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1ac53f-8ebc-477e-951a-c352f5190dfa","Type":"ContainerDied","Data":"51c52996b60d654bcb4cba26af3008ecb7f5c273b130d4b1fff840c788609863"} Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.714865 4856 scope.go:117] "RemoveContainer" containerID="c8ea62dc28799cf8a57ab49320722757e7daa1b484fab63adc64749d2e3de06c" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.715005 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.726490 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c9e232d-bd23-409c-bad7-3afa8197e1ee","Type":"ContainerDied","Data":"4fab0076f95c6ffbb1f43dc341fe7a3e7a463ef4ef32827c31dd1db772085f7e"} Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.726604 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.745565 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9lg2" event={"ID":"a3f84948-d98d-443d-9055-b4f4d28369b4","Type":"ContainerStarted","Data":"2a064e72a881dcc83802686f8d296d22eac5f29ee3afa83933593ff5378ac8ad"} Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.766002 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" event={"ID":"9fdad791-ab4e-49cb-acc9-f49240405f83","Type":"ContainerDied","Data":"625432ff85352cf04a2948858416bda64a118628889880f277ab380ee62eb21f"} Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.766126 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-g7ssp" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.785334 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l9lg2" podStartSLOduration=3.050880705 podStartE2EDuration="1m2.785314197s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="2025-12-03 09:29:47.448385797 +0000 UTC m=+1055.631278098" lastFinishedPulling="2025-12-03 09:30:47.182819289 +0000 UTC m=+1115.365711590" observedRunningTime="2025-12-03 09:30:47.765417554 +0000 UTC m=+1115.948309855" watchObservedRunningTime="2025-12-03 09:30:47.785314197 +0000 UTC m=+1115.968206498" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.801526 4856 scope.go:117] "RemoveContainer" containerID="9f83e48162abd1ca4a4f27ba238065a009f6cd4c4f4a8b0f6d3aba698d8fd79d" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803467 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-combined-ca-bundle\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803576 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-config-data\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803652 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-logs\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803683 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvvjb\" (UniqueName: \"kubernetes.io/projected/de1ac53f-8ebc-477e-951a-c352f5190dfa-kube-api-access-bvvjb\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803705 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-scripts\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803759 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-httpd-run\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.803799 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"de1ac53f-8ebc-477e-951a-c352f5190dfa\" (UID: \"de1ac53f-8ebc-477e-951a-c352f5190dfa\") " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.804710 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-logs" (OuterVolumeSpecName: "logs") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" 
(UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.808192 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" (UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.818430 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1ac53f-8ebc-477e-951a-c352f5190dfa-kube-api-access-bvvjb" (OuterVolumeSpecName: "kube-api-access-bvvjb") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" (UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "kube-api-access-bvvjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.820525 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.820651 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" (UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.847085 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-scripts" (OuterVolumeSpecName: "scripts") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" (UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.855309 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875090 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875736 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-httpd" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875755 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-httpd" Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875768 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-log" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875776 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-log" Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875795 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-httpd" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875801 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-httpd" Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875844 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11af8d9b-d6bc-43a5-9a08-cd946ac9acac" containerName="collect-profiles" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875853 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="11af8d9b-d6bc-43a5-9a08-cd946ac9acac" containerName="collect-profiles" Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875889 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-log" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875896 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-log" Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875907 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="init" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875913 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="init" Dec 03 09:30:47 crc kubenswrapper[4856]: E1203 09:30:47.875929 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="dnsmasq-dns" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.875934 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="dnsmasq-dns" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.876137 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-log" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.876157 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="11af8d9b-d6bc-43a5-9a08-cd946ac9acac" containerName="collect-profiles" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 
09:30:47.876164 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" containerName="dnsmasq-dns" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.876177 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-log" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.876185 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" containerName="glance-httpd" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.876196 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" containerName="glance-httpd" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.877452 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.882905 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.883337 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.899089 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-g7ssp"] Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.899100 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" (UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "combined-ca-bundle". 
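Note: the cpu_manager/memory_manager burst above is housekeeping triggered by the SyncLoop ADD of the replacement glance pod. Both managers walk their checkpointed per-container state and drop entries whose pod UIDs (the deleted glance, dnsmasq and collect-profiles pods) no longer exist; the E-level "RemoveStaleState: removing container" lines are expected cleanup, not failures. The pass amounts to a set difference, sketched here as a toy with UIDs abbreviated from the log:

    # Toy RemoveStaleState: drop checkpointed (podUID, container) pairs whose
    # pod no longer exists. The replacement pods get fresh UIDs
    # (e.g. 3ea94df7-...), so nothing carries over.
    checkpointed = {
        ("0c9e232d", "glance-httpd"), ("0c9e232d", "glance-log"),
        ("de1ac53f", "glance-httpd"), ("de1ac53f", "glance-log"),
        ("9fdad791", "init"), ("9fdad791", "dnsmasq-dns"),
        ("11af8d9b", "collect-profiles"),
    }
    live_pods = {"3ea94df7"}  # new glance-default-external-api-0
    stale = {(uid, c) for uid, c in checkpointed if uid not in live_pods}
    for uid, name in sorted(stale):
        print(f'RemoveStaleState: removing container podUID="{uid}" '
              f'containerName="{name}"')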
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.908505 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvvjb\" (UniqueName: \"kubernetes.io/projected/de1ac53f-8ebc-477e-951a-c352f5190dfa-kube-api-access-bvvjb\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.908563 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.908574 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.908607 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.908618 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.908629 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1ac53f-8ebc-477e-951a-c352f5190dfa-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.924246 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-config-data" (OuterVolumeSpecName: "config-data") pod "de1ac53f-8ebc-477e-951a-c352f5190dfa" (UID: "de1ac53f-8ebc-477e-951a-c352f5190dfa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.933450 4856 scope.go:117] "RemoveContainer" containerID="bf9f916eab5ebbe6a6d046f935d757b39d465d2b2915aa51e82f4fd54310d219" Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.933727 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-g7ssp"] Dec 03 09:30:47 crc kubenswrapper[4856]: I1203 09:30:47.981675 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.001602 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019121 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-logs\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019315 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019354 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " 
pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019444 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7q7\" (UniqueName: \"kubernetes.io/projected/3ea94df7-893b-4dde-997a-72d9453f9ff8-kube-api-access-6r7q7\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019515 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1ac53f-8ebc-477e-951a-c352f5190dfa-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.019532 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.054894 4856 scope.go:117] "RemoveContainer" containerID="f19687cf3d9a53b0a8dedae482c18a78038f58a556473864cad0f3d5d7caf743" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.092051 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.099769 4856 scope.go:117] "RemoveContainer" containerID="3ec631465d7275912bc9cf2c9740db5f9372aaa229307f64206922774d1c0820" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.116873 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.126710 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.128839 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-logs\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.128920 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.128956 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.128975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.129012 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.129032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7q7\" (UniqueName: \"kubernetes.io/projected/3ea94df7-893b-4dde-997a-72d9453f9ff8-kube-api-access-6r7q7\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.129070 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.129103 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.129659 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.132329 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.137008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.138713 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-logs\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.140260 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.140630 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.145028 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.146718 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.148485 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.175205 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7q7\" (UniqueName: \"kubernetes.io/projected/3ea94df7-893b-4dde-997a-72d9453f9ff8-kube-api-access-6r7q7\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.177105 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.180168 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.207960 4856 scope.go:117] "RemoveContainer" containerID="8e8635cfe69ccdaac967df1b2bbfb45b043611922258d66faa9a47de1df1c227" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.231985 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232125 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232189 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232209 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232242 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.232274 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmbx\" (UniqueName: \"kubernetes.io/projected/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-kube-api-access-8xmbx\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.255632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.334655 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.334749 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.334795 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.334891 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.334948 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.334974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.335015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.335047 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmbx\" (UniqueName: \"kubernetes.io/projected/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-kube-api-access-8xmbx\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.336233 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.336501 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.336266 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.342296 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.343340 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.344889 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.354649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmbx\" (UniqueName: \"kubernetes.io/projected/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-kube-api-access-8xmbx\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.363632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.379246 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.471282 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.531488 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.715345 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9e232d-bd23-409c-bad7-3afa8197e1ee" path="/var/lib/kubelet/pods/0c9e232d-bd23-409c-bad7-3afa8197e1ee/volumes" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.716622 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fdad791-ab4e-49cb-acc9-f49240405f83" path="/var/lib/kubelet/pods/9fdad791-ab4e-49cb-acc9-f49240405f83/volumes" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.717352 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1ac53f-8ebc-477e-951a-c352f5190dfa" path="/var/lib/kubelet/pods/de1ac53f-8ebc-477e-951a-c352f5190dfa/volumes" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.797688 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jpsww" event={"ID":"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b","Type":"ContainerStarted","Data":"6c417a1d3c1e90d9d598e43f026833def2a0ca935265a2f958419f54a4dd891a"} Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.834055 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jpsww" podStartSLOduration=3.894371829 podStartE2EDuration="1m3.8340358s" podCreationTimestamp="2025-12-03 09:29:45 +0000 UTC" firstStartedPulling="2025-12-03 09:29:47.241281503 +0000 UTC m=+1055.424173804" lastFinishedPulling="2025-12-03 09:30:47.180945474 +0000 UTC m=+1115.363837775" observedRunningTime="2025-12-03 09:30:48.825086733 +0000 UTC m=+1117.007979034" watchObservedRunningTime="2025-12-03 09:30:48.8340358 +0000 UTC m=+1117.016928101" Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.841480 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d683d201-f027-4492-966c-95fa0e5004cd","Type":"ContainerStarted","Data":"a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10"} Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.858960 4856 generic.go:334] "Generic (PLEG): container finished" podID="318afd46-7100-422f-983d-0a9c87cc38c6" containerID="c6eda802efe3ea79c12bda1a57d85ecebecb2503601b65c5cfc8e48473010979" exitCode=0 Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.859038 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5kxs" event={"ID":"318afd46-7100-422f-983d-0a9c87cc38c6","Type":"ContainerDied","Data":"c6eda802efe3ea79c12bda1a57d85ecebecb2503601b65c5cfc8e48473010979"} Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.867434 4856 generic.go:334] "Generic (PLEG): container finished" podID="17fa448b-f085-4377-a7d2-a4e078ae00c3" containerID="700b4bc7724023764e2ed91f12e99f5a84ee1952b947fa06b5e647b117fbf740" exitCode=0 Dec 03 09:30:48 crc kubenswrapper[4856]: I1203 09:30:48.867507 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dnjq" event={"ID":"17fa448b-f085-4377-a7d2-a4e078ae00c3","Type":"ContainerDied","Data":"700b4bc7724023764e2ed91f12e99f5a84ee1952b947fa06b5e647b117fbf740"} Dec 03 09:30:49 crc kubenswrapper[4856]: I1203 09:30:49.011799 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:30:49 crc kubenswrapper[4856]: W1203 09:30:49.024741 4856 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bb3b31a_4225_44ba_a8c8_41f6a46fb6c3.slice/crio-fa11ac341f052b9f97207e1a9e81d45f6e16407d4519ff67e7cdbaf6bc5a9a0e WatchSource:0}: Error finding container fa11ac341f052b9f97207e1a9e81d45f6e16407d4519ff67e7cdbaf6bc5a9a0e: Status 404 returned error can't find the container with id fa11ac341f052b9f97207e1a9e81d45f6e16407d4519ff67e7cdbaf6bc5a9a0e Dec 03 09:30:49 crc kubenswrapper[4856]: I1203 09:30:49.406627 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:30:49 crc kubenswrapper[4856]: I1203 09:30:49.962295 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ea94df7-893b-4dde-997a-72d9453f9ff8","Type":"ContainerStarted","Data":"4e9f288999a0d606f8b85bd4fc461dab6e798eb2e8919a65f01ca9087d96aba4"} Dec 03 09:30:49 crc kubenswrapper[4856]: I1203 09:30:49.969174 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3","Type":"ContainerStarted","Data":"f234e2541e2d33cc19aea176a313b04f5fa6e6eb6d1b6b538a8a56bad08b2dcc"} Dec 03 09:30:49 crc kubenswrapper[4856]: I1203 09:30:49.969249 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3","Type":"ContainerStarted","Data":"fa11ac341f052b9f97207e1a9e81d45f6e16407d4519ff67e7cdbaf6bc5a9a0e"} Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.638455 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k5kxs" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.649618 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2dnjq" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703379 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-credential-keys\") pod \"318afd46-7100-422f-983d-0a9c87cc38c6\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703488 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fa448b-f085-4377-a7d2-a4e078ae00c3-logs\") pod \"17fa448b-f085-4377-a7d2-a4e078ae00c3\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703513 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-combined-ca-bundle\") pod \"318afd46-7100-422f-983d-0a9c87cc38c6\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703539 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t87mg\" (UniqueName: \"kubernetes.io/projected/318afd46-7100-422f-983d-0a9c87cc38c6-kube-api-access-t87mg\") pod \"318afd46-7100-422f-983d-0a9c87cc38c6\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703587 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zkj2\" (UniqueName: \"kubernetes.io/projected/17fa448b-f085-4377-a7d2-a4e078ae00c3-kube-api-access-6zkj2\") pod \"17fa448b-f085-4377-a7d2-a4e078ae00c3\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-combined-ca-bundle\") pod \"17fa448b-f085-4377-a7d2-a4e078ae00c3\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-scripts\") pod \"318afd46-7100-422f-983d-0a9c87cc38c6\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703824 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-config-data\") pod \"17fa448b-f085-4377-a7d2-a4e078ae00c3\" (UID: \"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703868 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-config-data\") pod \"318afd46-7100-422f-983d-0a9c87cc38c6\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703887 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-scripts\") pod \"17fa448b-f085-4377-a7d2-a4e078ae00c3\" (UID: 
\"17fa448b-f085-4377-a7d2-a4e078ae00c3\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.703989 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-fernet-keys\") pod \"318afd46-7100-422f-983d-0a9c87cc38c6\" (UID: \"318afd46-7100-422f-983d-0a9c87cc38c6\") " Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.715407 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17fa448b-f085-4377-a7d2-a4e078ae00c3-logs" (OuterVolumeSpecName: "logs") pod "17fa448b-f085-4377-a7d2-a4e078ae00c3" (UID: "17fa448b-f085-4377-a7d2-a4e078ae00c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.718376 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "318afd46-7100-422f-983d-0a9c87cc38c6" (UID: "318afd46-7100-422f-983d-0a9c87cc38c6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.726795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-scripts" (OuterVolumeSpecName: "scripts") pod "318afd46-7100-422f-983d-0a9c87cc38c6" (UID: "318afd46-7100-422f-983d-0a9c87cc38c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.729772 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318afd46-7100-422f-983d-0a9c87cc38c6-kube-api-access-t87mg" (OuterVolumeSpecName: "kube-api-access-t87mg") pod "318afd46-7100-422f-983d-0a9c87cc38c6" (UID: "318afd46-7100-422f-983d-0a9c87cc38c6"). InnerVolumeSpecName "kube-api-access-t87mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.740219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "318afd46-7100-422f-983d-0a9c87cc38c6" (UID: "318afd46-7100-422f-983d-0a9c87cc38c6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.747576 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fa448b-f085-4377-a7d2-a4e078ae00c3-kube-api-access-6zkj2" (OuterVolumeSpecName: "kube-api-access-6zkj2") pod "17fa448b-f085-4377-a7d2-a4e078ae00c3" (UID: "17fa448b-f085-4377-a7d2-a4e078ae00c3"). InnerVolumeSpecName "kube-api-access-6zkj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.778083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-scripts" (OuterVolumeSpecName: "scripts") pod "17fa448b-f085-4377-a7d2-a4e078ae00c3" (UID: "17fa448b-f085-4377-a7d2-a4e078ae00c3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.788097 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-config-data" (OuterVolumeSpecName: "config-data") pod "318afd46-7100-422f-983d-0a9c87cc38c6" (UID: "318afd46-7100-422f-983d-0a9c87cc38c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.798171 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17fa448b-f085-4377-a7d2-a4e078ae00c3" (UID: "17fa448b-f085-4377-a7d2-a4e078ae00c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807766 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807851 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807867 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807883 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807903 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fa448b-f085-4377-a7d2-a4e078ae00c3-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807922 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t87mg\" (UniqueName: \"kubernetes.io/projected/318afd46-7100-422f-983d-0a9c87cc38c6-kube-api-access-t87mg\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807937 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zkj2\" (UniqueName: \"kubernetes.io/projected/17fa448b-f085-4377-a7d2-a4e078ae00c3-kube-api-access-6zkj2\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807952 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.807966 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.937959 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-config-data" (OuterVolumeSpecName: "config-data") pod 
"17fa448b-f085-4377-a7d2-a4e078ae00c3" (UID: "17fa448b-f085-4377-a7d2-a4e078ae00c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.954171 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fa448b-f085-4377-a7d2-a4e078ae00c3-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:50 crc kubenswrapper[4856]: I1203 09:30:50.978124 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "318afd46-7100-422f-983d-0a9c87cc38c6" (UID: "318afd46-7100-422f-983d-0a9c87cc38c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.058882 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318afd46-7100-422f-983d-0a9c87cc38c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.104755 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-866db7fbbf-khsgj"] Dec 03 09:30:51 crc kubenswrapper[4856]: E1203 09:30:51.105924 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fa448b-f085-4377-a7d2-a4e078ae00c3" containerName="placement-db-sync" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.106154 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fa448b-f085-4377-a7d2-a4e078ae00c3" containerName="placement-db-sync" Dec 03 09:30:51 crc kubenswrapper[4856]: E1203 09:30:51.106284 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318afd46-7100-422f-983d-0a9c87cc38c6" containerName="keystone-bootstrap" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.106364 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="318afd46-7100-422f-983d-0a9c87cc38c6" containerName="keystone-bootstrap" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.106735 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fa448b-f085-4377-a7d2-a4e078ae00c3" containerName="placement-db-sync" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.106954 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="318afd46-7100-422f-983d-0a9c87cc38c6" containerName="keystone-bootstrap" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.110513 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.121210 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.121549 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.126753 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-866db7fbbf-khsgj"] Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.128683 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k5kxs" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.129222 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5kxs" event={"ID":"318afd46-7100-422f-983d-0a9c87cc38c6","Type":"ContainerDied","Data":"b93e6277d2d6ed94fa7d5808f8a0dae5226632e11c78e66178d4fa0352d3a102"} Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.129295 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93e6277d2d6ed94fa7d5808f8a0dae5226632e11c78e66178d4fa0352d3a102" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161311 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-public-tls-certs\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161375 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-config-data\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161402 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-fernet-keys\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161429 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-combined-ca-bundle\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161498 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-internal-tls-certs\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161546 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-scripts\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161566 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-credential-keys\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.161587 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hlhrn\" (UniqueName: \"kubernetes.io/projected/a52f4628-4166-45bf-893f-98155011723d-kube-api-access-hlhrn\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.169954 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-549d5987fb-kphsk"] Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.180056 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.184672 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.184908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dnjq" event={"ID":"17fa448b-f085-4377-a7d2-a4e078ae00c3","Type":"ContainerDied","Data":"cb3aacb4d72c88735ba2c13c428561eaf0f2a073c23c583a627c341a2b7eaf12"} Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.184958 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb3aacb4d72c88735ba2c13c428561eaf0f2a073c23c583a627c341a2b7eaf12" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.185091 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.185225 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2dnjq" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.189749 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ea94df7-893b-4dde-997a-72d9453f9ff8","Type":"ContainerStarted","Data":"f891e6da20b5529fa7af17674aa6ff33ba243d96d6d0cbc4b41365b420825fe8"} Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.214909 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-549d5987fb-kphsk"] Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263402 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-public-tls-certs\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-config-data\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263494 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-fernet-keys\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263538 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-combined-ca-bundle\") pod \"keystone-866db7fbbf-khsgj\" 
(UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-internal-tls-certs\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263855 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-scripts\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-credential-keys\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.263911 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhrn\" (UniqueName: \"kubernetes.io/projected/a52f4628-4166-45bf-893f-98155011723d-kube-api-access-hlhrn\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.270242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-config-data\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.273490 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-internal-tls-certs\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.273672 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-public-tls-certs\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.281862 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-fernet-keys\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.283965 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-scripts\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.295933 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-credential-keys\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.312128 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52f4628-4166-45bf-893f-98155011723d-combined-ca-bundle\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.323615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhrn\" (UniqueName: \"kubernetes.io/projected/a52f4628-4166-45bf-893f-98155011723d-kube-api-access-hlhrn\") pod \"keystone-866db7fbbf-khsgj\" (UID: \"a52f4628-4166-45bf-893f-98155011723d\") " pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.366933 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-public-tls-certs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.366992 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-combined-ca-bundle\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.367046 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-internal-tls-certs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.367072 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g974r\" (UniqueName: \"kubernetes.io/projected/f92cb955-92c8-46d4-adbf-f8de7330cd2c-kube-api-access-g974r\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.367149 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f92cb955-92c8-46d4-adbf-f8de7330cd2c-logs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.367168 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-scripts\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.367194 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-config-data\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469219 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-scripts\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469263 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f92cb955-92c8-46d4-adbf-f8de7330cd2c-logs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-config-data\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-public-tls-certs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-combined-ca-bundle\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469486 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-internal-tls-certs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.469509 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g974r\" (UniqueName: \"kubernetes.io/projected/f92cb955-92c8-46d4-adbf-f8de7330cd2c-kube-api-access-g974r\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.472486 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f92cb955-92c8-46d4-adbf-f8de7330cd2c-logs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.475508 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-public-tls-certs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.482861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-scripts\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.483951 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-internal-tls-certs\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.488456 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-combined-ca-bundle\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.489070 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92cb955-92c8-46d4-adbf-f8de7330cd2c-config-data\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:51 crc kubenswrapper[4856]: I1203 09:30:51.493307 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g974r\" (UniqueName: \"kubernetes.io/projected/f92cb955-92c8-46d4-adbf-f8de7330cd2c-kube-api-access-g974r\") pod \"placement-549d5987fb-kphsk\" (UID: \"f92cb955-92c8-46d4-adbf-f8de7330cd2c\") " pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:52 crc kubenswrapper[4856]: I1203 09:30:51.929985 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:52 crc kubenswrapper[4856]: I1203 09:30:51.934224 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.041058 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-549d5987fb-kphsk"] Dec 03 09:30:53 crc kubenswrapper[4856]: W1203 09:30:53.044705 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92cb955_92c8_46d4_adbf_f8de7330cd2c.slice/crio-d42de85f88c0b7f185b66d82bf31376cd8bca1a9131cc9c6167928ca3302db12 WatchSource:0}: Error finding container d42de85f88c0b7f185b66d82bf31376cd8bca1a9131cc9c6167928ca3302db12: Status 404 returned error can't find the container with id d42de85f88c0b7f185b66d82bf31376cd8bca1a9131cc9c6167928ca3302db12 Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.169363 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-866db7fbbf-khsgj"] Dec 03 09:30:53 crc kubenswrapper[4856]: W1203 09:30:53.198229 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda52f4628_4166_45bf_893f_98155011723d.slice/crio-b9151b99cb5474cf42ec857f5a1519a1360499d0e652a057681868ae5b0ee618 WatchSource:0}: Error finding container b9151b99cb5474cf42ec857f5a1519a1360499d0e652a057681868ae5b0ee618: Status 404 returned error can't find the container with id b9151b99cb5474cf42ec857f5a1519a1360499d0e652a057681868ae5b0ee618 Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.494640 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ea94df7-893b-4dde-997a-72d9453f9ff8","Type":"ContainerStarted","Data":"bb969a2b5253a869cf863f56934a31d538d953f54401127ef10cf3677e4e7498"} Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.503655 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-549d5987fb-kphsk" event={"ID":"f92cb955-92c8-46d4-adbf-f8de7330cd2c","Type":"ContainerStarted","Data":"d42de85f88c0b7f185b66d82bf31376cd8bca1a9131cc9c6167928ca3302db12"} Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.504912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-866db7fbbf-khsgj" event={"ID":"a52f4628-4166-45bf-893f-98155011723d","Type":"ContainerStarted","Data":"b9151b99cb5474cf42ec857f5a1519a1360499d0e652a057681868ae5b0ee618"} Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.506431 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3","Type":"ContainerStarted","Data":"bc03f895c9b268026f2a3a0190885e0efd066488cf22308e6caf642a17ec7893"} Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.562069 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.562036647 podStartE2EDuration="6.562036647s" podCreationTimestamp="2025-12-03 09:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:53.525385328 +0000 UTC m=+1121.708277639" watchObservedRunningTime="2025-12-03 09:30:53.562036647 +0000 UTC m=+1121.744928948" Dec 03 09:30:53 crc kubenswrapper[4856]: I1203 09:30:53.583908 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.583876837 podStartE2EDuration="5.583876837s" 
podCreationTimestamp="2025-12-03 09:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:53.562729614 +0000 UTC m=+1121.745621925" watchObservedRunningTime="2025-12-03 09:30:53.583876837 +0000 UTC m=+1121.766769138" Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.529240 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-549d5987fb-kphsk" event={"ID":"f92cb955-92c8-46d4-adbf-f8de7330cd2c","Type":"ContainerStarted","Data":"754b2cf159dac48ac7851f1f13a7735ba0a1c3421ec11ddea247b65bde3fecf3"} Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.533793 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-866db7fbbf-khsgj" event={"ID":"a52f4628-4166-45bf-893f-98155011723d","Type":"ContainerStarted","Data":"9a3617c313fd7ca99e2ede3937688b168138629e73d74ac07aff871c4b17a1a6"} Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.534938 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.538657 4856 generic.go:334] "Generic (PLEG): container finished" podID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerID="fc0bd12469bc5033edc430b4585c91f3665258ecf2d573ea9d107523331f910b" exitCode=137 Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.538688 4856 generic.go:334] "Generic (PLEG): container finished" podID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerID="d4bc1e99b4ae36a2bc75192793bc4e40a9dc8df477025847c42f2f6fa86d4508" exitCode=137 Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.539555 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-764b5497d9-dcts9" event={"ID":"032e03e8-4998-4765-a8e5-b80f5ad90372","Type":"ContainerDied","Data":"fc0bd12469bc5033edc430b4585c91f3665258ecf2d573ea9d107523331f910b"} Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.539588 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-764b5497d9-dcts9" event={"ID":"032e03e8-4998-4765-a8e5-b80f5ad90372","Type":"ContainerDied","Data":"d4bc1e99b4ae36a2bc75192793bc4e40a9dc8df477025847c42f2f6fa86d4508"} Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.563962 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-866db7fbbf-khsgj" podStartSLOduration=3.563933305 podStartE2EDuration="3.563933305s" podCreationTimestamp="2025-12-03 09:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:54.557072308 +0000 UTC m=+1122.739964609" watchObservedRunningTime="2025-12-03 09:30:54.563933305 +0000 UTC m=+1122.746825606" Dec 03 09:30:54 crc kubenswrapper[4856]: I1203 09:30:54.884963 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.041148 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-scripts\") pod \"032e03e8-4998-4765-a8e5-b80f5ad90372\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.041217 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z8vm\" (UniqueName: \"kubernetes.io/projected/032e03e8-4998-4765-a8e5-b80f5ad90372-kube-api-access-4z8vm\") pod \"032e03e8-4998-4765-a8e5-b80f5ad90372\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.041339 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/032e03e8-4998-4765-a8e5-b80f5ad90372-horizon-secret-key\") pod \"032e03e8-4998-4765-a8e5-b80f5ad90372\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.041411 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-config-data\") pod \"032e03e8-4998-4765-a8e5-b80f5ad90372\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.041455 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e03e8-4998-4765-a8e5-b80f5ad90372-logs\") pod \"032e03e8-4998-4765-a8e5-b80f5ad90372\" (UID: \"032e03e8-4998-4765-a8e5-b80f5ad90372\") " Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.042293 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/032e03e8-4998-4765-a8e5-b80f5ad90372-logs" (OuterVolumeSpecName: "logs") pod "032e03e8-4998-4765-a8e5-b80f5ad90372" (UID: "032e03e8-4998-4765-a8e5-b80f5ad90372"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.071399 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032e03e8-4998-4765-a8e5-b80f5ad90372-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "032e03e8-4998-4765-a8e5-b80f5ad90372" (UID: "032e03e8-4998-4765-a8e5-b80f5ad90372"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.072317 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032e03e8-4998-4765-a8e5-b80f5ad90372-kube-api-access-4z8vm" (OuterVolumeSpecName: "kube-api-access-4z8vm") pod "032e03e8-4998-4765-a8e5-b80f5ad90372" (UID: "032e03e8-4998-4765-a8e5-b80f5ad90372"). InnerVolumeSpecName "kube-api-access-4z8vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.075646 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-config-data" (OuterVolumeSpecName: "config-data") pod "032e03e8-4998-4765-a8e5-b80f5ad90372" (UID: "032e03e8-4998-4765-a8e5-b80f5ad90372"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.084247 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-scripts" (OuterVolumeSpecName: "scripts") pod "032e03e8-4998-4765-a8e5-b80f5ad90372" (UID: "032e03e8-4998-4765-a8e5-b80f5ad90372"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.144543 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.144593 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z8vm\" (UniqueName: \"kubernetes.io/projected/032e03e8-4998-4765-a8e5-b80f5ad90372-kube-api-access-4z8vm\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.144615 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/032e03e8-4998-4765-a8e5-b80f5ad90372-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.144626 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/032e03e8-4998-4765-a8e5-b80f5ad90372-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.144638 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e03e8-4998-4765-a8e5-b80f5ad90372-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.553962 4856 generic.go:334] "Generic (PLEG): container finished" podID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerID="f7c92fc6193b67cc52cac39374c20437c0be502c91f48db466472c343e7eb68e" exitCode=137 Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.554432 4856 generic.go:334] "Generic (PLEG): container finished" podID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerID="1e4147583c61c530af14c175e04cda874e752fd546e942b5665f2daeecd3e1fc" exitCode=137 Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.554028 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6d6478fc-xjpl5" event={"ID":"0cadb6e2-105e-4b63-afeb-77fe7030b233","Type":"ContainerDied","Data":"f7c92fc6193b67cc52cac39374c20437c0be502c91f48db466472c343e7eb68e"} Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.554498 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6d6478fc-xjpl5" event={"ID":"0cadb6e2-105e-4b63-afeb-77fe7030b233","Type":"ContainerDied","Data":"1e4147583c61c530af14c175e04cda874e752fd546e942b5665f2daeecd3e1fc"} Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.558205 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-549d5987fb-kphsk" event={"ID":"f92cb955-92c8-46d4-adbf-f8de7330cd2c","Type":"ContainerStarted","Data":"16d71864eaad948402ad87102d9ebd45c530b13cb1166269b770137fb4f920d2"} Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.558407 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.558460 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.562020 4856 generic.go:334] "Generic (PLEG): container finished" podID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerID="93459fc184cdb63fb1dc5fc7ac130af195be451b4b25efa992d10a4ce944fa52" exitCode=137 Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.562071 4856 generic.go:334] "Generic (PLEG): container finished" podID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerID="2e6d89843b66128e4d4e9a51432fad7cf0620a0f6bd20e2e3a576480d4146fa9" exitCode=137 Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.562071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78fd484f8f-4w5ks" event={"ID":"452932c0-23f1-40ba-8dd8-121eff6a2ea6","Type":"ContainerDied","Data":"93459fc184cdb63fb1dc5fc7ac130af195be451b4b25efa992d10a4ce944fa52"} Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.562131 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78fd484f8f-4w5ks" event={"ID":"452932c0-23f1-40ba-8dd8-121eff6a2ea6","Type":"ContainerDied","Data":"2e6d89843b66128e4d4e9a51432fad7cf0620a0f6bd20e2e3a576480d4146fa9"} Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.566224 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-764b5497d9-dcts9" event={"ID":"032e03e8-4998-4765-a8e5-b80f5ad90372","Type":"ContainerDied","Data":"e71ed2fd88d953b2f8b8ab964d7adf78e78752adbba798390a2c941659b0daed"} Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.566289 4856 scope.go:117] "RemoveContainer" containerID="fc0bd12469bc5033edc430b4585c91f3665258ecf2d573ea9d107523331f910b" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.566324 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-764b5497d9-dcts9" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.621037 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-549d5987fb-kphsk" podStartSLOduration=4.621011411 podStartE2EDuration="4.621011411s" podCreationTimestamp="2025-12-03 09:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:30:55.583593104 +0000 UTC m=+1123.766485405" watchObservedRunningTime="2025-12-03 09:30:55.621011411 +0000 UTC m=+1123.803903712" Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.657979 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-764b5497d9-dcts9"] Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.666832 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-764b5497d9-dcts9"] Dec 03 09:30:55 crc kubenswrapper[4856]: I1203 09:30:55.833658 4856 scope.go:117] "RemoveContainer" containerID="d4bc1e99b4ae36a2bc75192793bc4e40a9dc8df477025847c42f2f6fa86d4508" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.181183 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78fd484f8f-4w5ks" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.272188 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c6d6478fc-xjpl5" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.373787 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2z2g\" (UniqueName: \"kubernetes.io/projected/0cadb6e2-105e-4b63-afeb-77fe7030b233-kube-api-access-t2z2g\") pod \"0cadb6e2-105e-4b63-afeb-77fe7030b233\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.373921 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cadb6e2-105e-4b63-afeb-77fe7030b233-horizon-secret-key\") pod \"0cadb6e2-105e-4b63-afeb-77fe7030b233\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.373961 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2jd5\" (UniqueName: \"kubernetes.io/projected/452932c0-23f1-40ba-8dd8-121eff6a2ea6-kube-api-access-x2jd5\") pod \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374097 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-config-data\") pod \"0cadb6e2-105e-4b63-afeb-77fe7030b233\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374149 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-config-data\") pod \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374197 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-scripts\") pod \"0cadb6e2-105e-4b63-afeb-77fe7030b233\" (UID: \"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374232 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-scripts\") pod \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374287 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/452932c0-23f1-40ba-8dd8-121eff6a2ea6-horizon-secret-key\") pod \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374324 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452932c0-23f1-40ba-8dd8-121eff6a2ea6-logs\") pod \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\" (UID: \"452932c0-23f1-40ba-8dd8-121eff6a2ea6\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.374371 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cadb6e2-105e-4b63-afeb-77fe7030b233-logs\") pod \"0cadb6e2-105e-4b63-afeb-77fe7030b233\" (UID: 
\"0cadb6e2-105e-4b63-afeb-77fe7030b233\") " Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.375201 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cadb6e2-105e-4b63-afeb-77fe7030b233-logs" (OuterVolumeSpecName: "logs") pod "0cadb6e2-105e-4b63-afeb-77fe7030b233" (UID: "0cadb6e2-105e-4b63-afeb-77fe7030b233"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.375586 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452932c0-23f1-40ba-8dd8-121eff6a2ea6-logs" (OuterVolumeSpecName: "logs") pod "452932c0-23f1-40ba-8dd8-121eff6a2ea6" (UID: "452932c0-23f1-40ba-8dd8-121eff6a2ea6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.381182 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cadb6e2-105e-4b63-afeb-77fe7030b233-kube-api-access-t2z2g" (OuterVolumeSpecName: "kube-api-access-t2z2g") pod "0cadb6e2-105e-4b63-afeb-77fe7030b233" (UID: "0cadb6e2-105e-4b63-afeb-77fe7030b233"). InnerVolumeSpecName "kube-api-access-t2z2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.381267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cadb6e2-105e-4b63-afeb-77fe7030b233-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0cadb6e2-105e-4b63-afeb-77fe7030b233" (UID: "0cadb6e2-105e-4b63-afeb-77fe7030b233"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.381283 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452932c0-23f1-40ba-8dd8-121eff6a2ea6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "452932c0-23f1-40ba-8dd8-121eff6a2ea6" (UID: "452932c0-23f1-40ba-8dd8-121eff6a2ea6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.381319 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452932c0-23f1-40ba-8dd8-121eff6a2ea6-kube-api-access-x2jd5" (OuterVolumeSpecName: "kube-api-access-x2jd5") pod "452932c0-23f1-40ba-8dd8-121eff6a2ea6" (UID: "452932c0-23f1-40ba-8dd8-121eff6a2ea6"). InnerVolumeSpecName "kube-api-access-x2jd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.401897 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-config-data" (OuterVolumeSpecName: "config-data") pod "0cadb6e2-105e-4b63-afeb-77fe7030b233" (UID: "0cadb6e2-105e-4b63-afeb-77fe7030b233"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.405733 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-config-data" (OuterVolumeSpecName: "config-data") pod "452932c0-23f1-40ba-8dd8-121eff6a2ea6" (UID: "452932c0-23f1-40ba-8dd8-121eff6a2ea6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.409057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-scripts" (OuterVolumeSpecName: "scripts") pod "0cadb6e2-105e-4b63-afeb-77fe7030b233" (UID: "0cadb6e2-105e-4b63-afeb-77fe7030b233"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.422772 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-scripts" (OuterVolumeSpecName: "scripts") pod "452932c0-23f1-40ba-8dd8-121eff6a2ea6" (UID: "452932c0-23f1-40ba-8dd8-121eff6a2ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476695 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cadb6e2-105e-4b63-afeb-77fe7030b233-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476744 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2z2g\" (UniqueName: \"kubernetes.io/projected/0cadb6e2-105e-4b63-afeb-77fe7030b233-kube-api-access-t2z2g\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476762 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cadb6e2-105e-4b63-afeb-77fe7030b233-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476775 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2jd5\" (UniqueName: \"kubernetes.io/projected/452932c0-23f1-40ba-8dd8-121eff6a2ea6-kube-api-access-x2jd5\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476786 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476797 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476828 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cadb6e2-105e-4b63-afeb-77fe7030b233-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476841 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/452932c0-23f1-40ba-8dd8-121eff6a2ea6-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476852 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/452932c0-23f1-40ba-8dd8-121eff6a2ea6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.476867 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452932c0-23f1-40ba-8dd8-121eff6a2ea6-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:30:56 
crc kubenswrapper[4856]: I1203 09:30:56.617573 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78fd484f8f-4w5ks" event={"ID":"452932c0-23f1-40ba-8dd8-121eff6a2ea6","Type":"ContainerDied","Data":"6bb14757922a603191359540c0c04aa9445a0918ce87f89f847eb6b1e3ecbed4"}
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.617643 4856 scope.go:117] "RemoveContainer" containerID="93459fc184cdb63fb1dc5fc7ac130af195be451b4b25efa992d10a4ce944fa52"
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.617689 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78fd484f8f-4w5ks"
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.630261 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6d6478fc-xjpl5" event={"ID":"0cadb6e2-105e-4b63-afeb-77fe7030b233","Type":"ContainerDied","Data":"64b4ef4935e275053f5e76c661d4a3d48b987fa31472c19c9d15af26f36f7f7b"}
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.630351 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c6d6478fc-xjpl5"
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.732748 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" path="/var/lib/kubelet/pods/032e03e8-4998-4765-a8e5-b80f5ad90372/volumes"
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.733899 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78fd484f8f-4w5ks"]
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.746877 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78fd484f8f-4w5ks"]
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.765207 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c6d6478fc-xjpl5"]
Dec 03 09:30:56 crc kubenswrapper[4856]: I1203 09:30:56.779289 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c6d6478fc-xjpl5"]
Dec 03 09:30:57 crc kubenswrapper[4856]: I1203 09:30:57.841051 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-777b75cf48-68qq9"
Dec 03 09:30:57 crc kubenswrapper[4856]: I1203 09:30:57.952693 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-569f95b-qhsts"
Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.471535 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.471585 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.507550 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.532356 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.532410 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.577090 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
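
The horizon teardown above follows kubelet's fixed order: UnmountVolume.TearDown per volume, "Volume detached" once the reconciler's actual state catches up, a final ContainerDied event for the pod sandbox, "SyncLoop DELETE" and then "SyncLoop REMOVE" as the Pod object disappears from the API server, and finally "Cleaned up orphaned pod volumes dir" under /var/lib/kubelet/pods/<uid>/volumes. The source="api" events are fed by kubelet's watch against the API server; a hedged client-go sketch of such a watch (kubeconfig path assumed, namespace taken from the log; the real kubelet filters on spec.nodeName instead):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at this path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod deletions like the ones logged here arrive as watch events.
	w, err := cs.CoreV1().Pods("openstack").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED, MODIFIED, DELETED
	}
}

Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 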
09:30:58.580005 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.588120 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.655161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.655539 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.655577 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.655589 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.703837 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" path="/var/lib/kubelet/pods/0cadb6e2-105e-4b63-afeb-77fe7030b233/volumes" Dec 03 09:30:58 crc kubenswrapper[4856]: I1203 09:30:58.705317 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" path="/var/lib/kubelet/pods/452932c0-23f1-40ba-8dd8-121eff6a2ea6/volumes" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.018798 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.278388 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-569f95b-qhsts" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.414418 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-777b75cf48-68qq9"] Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.680643 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.680677 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.680752 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.680787 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.681358 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-777b75cf48-68qq9" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon-log" containerID="cri-o://8e8d8d0e4f752b03d3faa4fb027bc5872ed5ce85678c10a5a108fa2cb35e40a3" gracePeriod=30 Dec 03 09:31:00 crc kubenswrapper[4856]: I1203 09:31:00.681679 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-777b75cf48-68qq9" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" containerID="cri-o://279dc7b9f3a417f2f052c9b05e4582beabe488060296a80e641c58f25f734338" gracePeriod=30 Dec 03 09:31:01 crc kubenswrapper[4856]: I1203 09:31:01.832871 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 09:31:01 crc kubenswrapper[4856]: I1203 
09:31:01.833612 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:31:02 crc kubenswrapper[4856]: I1203 09:31:02.085649 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:02 crc kubenswrapper[4856]: I1203 09:31:02.085791 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:31:02 crc kubenswrapper[4856]: I1203 09:31:02.172986 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 09:31:02 crc kubenswrapper[4856]: I1203 09:31:02.284224 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:04 crc kubenswrapper[4856]: I1203 09:31:04.722817 4856 generic.go:334] "Generic (PLEG): container finished" podID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerID="279dc7b9f3a417f2f052c9b05e4582beabe488060296a80e641c58f25f734338" exitCode=0 Dec 03 09:31:04 crc kubenswrapper[4856]: I1203 09:31:04.722888 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-777b75cf48-68qq9" event={"ID":"bdf3f133-b175-4a03-9518-91a5bb351c07","Type":"ContainerDied","Data":"279dc7b9f3a417f2f052c9b05e4582beabe488060296a80e641c58f25f734338"} Dec 03 09:31:05 crc kubenswrapper[4856]: I1203 09:31:05.240413 4856 scope.go:117] "RemoveContainer" containerID="2e6d89843b66128e4d4e9a51432fad7cf0620a0f6bd20e2e3a576480d4146fa9" Dec 03 09:31:05 crc kubenswrapper[4856]: I1203 09:31:05.630090 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-777b75cf48-68qq9" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 03 09:31:06 crc kubenswrapper[4856]: I1203 09:31:06.295235 4856 scope.go:117] "RemoveContainer" containerID="f7c92fc6193b67cc52cac39374c20437c0be502c91f48db466472c343e7eb68e" Dec 03 09:31:06 crc kubenswrapper[4856]: E1203 09:31:06.519726 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 03 09:31:06 crc kubenswrapper[4856]: E1203 09:31:06.520046 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d683d201-f027-4492-966c-95fa0e5004cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 09:31:06 crc kubenswrapper[4856]: E1203 09:31:06.521439 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="d683d201-f027-4492-966c-95fa0e5004cd" Dec 03 09:31:06 crc kubenswrapper[4856]: I1203 09:31:06.522792 4856 scope.go:117] "RemoveContainer" containerID="1e4147583c61c530af14c175e04cda874e752fd546e942b5665f2daeecd3e1fc" Dec 03 09:31:06 crc kubenswrapper[4856]: I1203 09:31:06.750256 4856 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="ceilometer-notification-agent" containerID="cri-o://6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8" gracePeriod=30 Dec 03 09:31:06 crc kubenswrapper[4856]: I1203 09:31:06.750772 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="sg-core" containerID="cri-o://a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10" gracePeriod=30 Dec 03 09:31:07 crc kubenswrapper[4856]: I1203 09:31:07.763054 4856 generic.go:334] "Generic (PLEG): container finished" podID="a3f84948-d98d-443d-9055-b4f4d28369b4" containerID="2a064e72a881dcc83802686f8d296d22eac5f29ee3afa83933593ff5378ac8ad" exitCode=0 Dec 03 09:31:07 crc kubenswrapper[4856]: I1203 09:31:07.763157 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9lg2" event={"ID":"a3f84948-d98d-443d-9055-b4f4d28369b4","Type":"ContainerDied","Data":"2a064e72a881dcc83802686f8d296d22eac5f29ee3afa83933593ff5378ac8ad"} Dec 03 09:31:07 crc kubenswrapper[4856]: I1203 09:31:07.766644 4856 generic.go:334] "Generic (PLEG): container finished" podID="d683d201-f027-4492-966c-95fa0e5004cd" containerID="a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10" exitCode=2 Dec 03 09:31:07 crc kubenswrapper[4856]: I1203 09:31:07.766686 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d683d201-f027-4492-966c-95fa0e5004cd","Type":"ContainerDied","Data":"a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10"} Dec 03 09:31:08 crc kubenswrapper[4856]: I1203 09:31:08.779259 4856 generic.go:334] "Generic (PLEG): container finished" podID="3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" containerID="87d0121ed930968e361a811d3a8f9e0024155ec9e7462791a5ceb1a36a1128db" exitCode=0 Dec 03 09:31:08 crc kubenswrapper[4856]: I1203 09:31:08.779434 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxpjp" event={"ID":"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0","Type":"ContainerDied","Data":"87d0121ed930968e361a811d3a8f9e0024155ec9e7462791a5ceb1a36a1128db"} Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.264130 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.356643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-db-sync-config-data\") pod \"a3f84948-d98d-443d-9055-b4f4d28369b4\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.356834 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-combined-ca-bundle\") pod \"a3f84948-d98d-443d-9055-b4f4d28369b4\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.357056 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2shnl\" (UniqueName: \"kubernetes.io/projected/a3f84948-d98d-443d-9055-b4f4d28369b4-kube-api-access-2shnl\") pod \"a3f84948-d98d-443d-9055-b4f4d28369b4\" (UID: \"a3f84948-d98d-443d-9055-b4f4d28369b4\") " Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.364908 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f84948-d98d-443d-9055-b4f4d28369b4-kube-api-access-2shnl" (OuterVolumeSpecName: "kube-api-access-2shnl") pod "a3f84948-d98d-443d-9055-b4f4d28369b4" (UID: "a3f84948-d98d-443d-9055-b4f4d28369b4"). InnerVolumeSpecName "kube-api-access-2shnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.369971 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a3f84948-d98d-443d-9055-b4f4d28369b4" (UID: "a3f84948-d98d-443d-9055-b4f4d28369b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.391441 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3f84948-d98d-443d-9055-b4f4d28369b4" (UID: "a3f84948-d98d-443d-9055-b4f4d28369b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.460251 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.460294 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2shnl\" (UniqueName: \"kubernetes.io/projected/a3f84948-d98d-443d-9055-b4f4d28369b4-kube-api-access-2shnl\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.460309 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3f84948-d98d-443d-9055-b4f4d28369b4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.822396 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l9lg2" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.822392 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9lg2" event={"ID":"a3f84948-d98d-443d-9055-b4f4d28369b4","Type":"ContainerDied","Data":"730245bd63fa1744d67ed792d04719d551cfac2708335df18207a97649f98bb9"} Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.822603 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730245bd63fa1744d67ed792d04719d551cfac2708335df18207a97649f98bb9" Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.829325 4856 generic.go:334] "Generic (PLEG): container finished" podID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" containerID="6c417a1d3c1e90d9d598e43f026833def2a0ca935265a2f958419f54a4dd891a" exitCode=0 Dec 03 09:31:09 crc kubenswrapper[4856]: I1203 09:31:09.829431 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jpsww" event={"ID":"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b","Type":"ContainerDied","Data":"6c417a1d3c1e90d9d598e43f026833def2a0ca935265a2f958419f54a4dd891a"} Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.119912 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-545b57f4f4-cmb44"] Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121175 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121197 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121231 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f84948-d98d-443d-9055-b4f4d28369b4" containerName="barbican-db-sync" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121237 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f84948-d98d-443d-9055-b4f4d28369b4" containerName="barbican-db-sync" Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121256 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121262 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121274 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121280 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121294 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121301 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121330 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121335 4856 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.121345 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121351 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121558 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121572 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121594 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cadb6e2-105e-4b63-afeb-77fe7030b233" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121601 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121612 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="452932c0-23f1-40ba-8dd8-121eff6a2ea6" containerName="horizon" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121623 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f84948-d98d-443d-9055-b4f4d28369b4" containerName="barbican-db-sync" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.121632 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="032e03e8-4998-4765-a8e5-b80f5ad90372" containerName="horizon-log" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.122937 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.129782 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.136714 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xjbjr" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.137002 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.155358 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68d6cb77d9-4m8kq"] Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.157396 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.160217 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.174268 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68d6cb77d9-4m8kq"] Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.178229 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-config-data-custom\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.178383 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e157e2-2dcf-4664-9b48-1e6186729ef0-logs\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.178450 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rss8v\" (UniqueName: \"kubernetes.io/projected/a8e157e2-2dcf-4664-9b48-1e6186729ef0-kube-api-access-rss8v\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.178526 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-combined-ca-bundle\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.178575 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-config-data\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.194056 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-545b57f4f4-cmb44"] Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-config-data\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280121 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-combined-ca-bundle\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: 
\"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfzps\" (UniqueName: \"kubernetes.io/projected/1a742807-921a-47f8-883b-10c4b972c350-kube-api-access-wfzps\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-config-data-custom\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280349 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-config-data\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280385 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-config-data-custom\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280418 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e157e2-2dcf-4664-9b48-1e6186729ef0-logs\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280460 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rss8v\" (UniqueName: \"kubernetes.io/projected/a8e157e2-2dcf-4664-9b48-1e6186729ef0-kube-api-access-rss8v\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280484 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a742807-921a-47f8-883b-10c4b972c350-logs\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.280528 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-combined-ca-bundle\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.288632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8e157e2-2dcf-4664-9b48-1e6186729ef0-logs\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.292459 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-combined-ca-bundle\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.292608 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-config-data\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.294918 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8e157e2-2dcf-4664-9b48-1e6186729ef0-config-data-custom\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.323326 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-blz8g"] Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.325345 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.328767 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rss8v\" (UniqueName: \"kubernetes.io/projected/a8e157e2-2dcf-4664-9b48-1e6186729ef0-kube-api-access-rss8v\") pod \"barbican-keystone-listener-545b57f4f4-cmb44\" (UID: \"a8e157e2-2dcf-4664-9b48-1e6186729ef0\") " pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.366526 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-blz8g"] Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.380605 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383759 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-combined-ca-bundle\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383822 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfzps\" (UniqueName: \"kubernetes.io/projected/1a742807-921a-47f8-883b-10c4b972c350-kube-api-access-wfzps\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383891 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-config\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383941 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383958 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.383979 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clhc6\" (UniqueName: \"kubernetes.io/projected/206e6135-3677-45b6-96b6-3320bce29cd3-kube-api-access-clhc6\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.384022 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-config-data\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.384047 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-config-data-custom\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.384075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a742807-921a-47f8-883b-10c4b972c350-logs\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.384111 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.388210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-combined-ca-bundle\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.388480 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a742807-921a-47f8-883b-10c4b972c350-logs\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.393294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-config-data-custom\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.393726 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a742807-921a-47f8-883b-10c4b972c350-config-data\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.418490 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfzps\" (UniqueName: \"kubernetes.io/projected/1a742807-921a-47f8-883b-10c4b972c350-kube-api-access-wfzps\") pod \"barbican-worker-68d6cb77d9-4m8kq\" (UID: \"1a742807-921a-47f8-883b-10c4b972c350\") " pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.468125 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.497941 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-config\") pod \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.498336 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-combined-ca-bundle\") pod \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.498467 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z42zz\" (UniqueName: \"kubernetes.io/projected/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-kube-api-access-z42zz\") pod \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\" (UID: \"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0\") " Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.499847 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.500323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-config\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.501134 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.501315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.501395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.501475 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clhc6\" (UniqueName: \"kubernetes.io/projected/206e6135-3677-45b6-96b6-3320bce29cd3-kube-api-access-clhc6\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.505337 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.508004 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.508784 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.511711 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-config\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.513698 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68d6cb77d9-4m8kq" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.514553 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.539224 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77564f8754-gb7wv"] Dec 03 09:31:10 crc kubenswrapper[4856]: E1203 09:31:10.540850 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" containerName="neutron-db-sync" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.540885 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" containerName="neutron-db-sync" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.541341 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" containerName="neutron-db-sync" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.541782 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-kube-api-access-z42zz" (OuterVolumeSpecName: "kube-api-access-z42zz") pod "3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" (UID: "3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0"). InnerVolumeSpecName "kube-api-access-z42zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.543511 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.547398 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhc6\" (UniqueName: \"kubernetes.io/projected/206e6135-3677-45b6-96b6-3320bce29cd3-kube-api-access-clhc6\") pod \"dnsmasq-dns-7c67bffd47-blz8g\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.552519 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.559786 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-config" (OuterVolumeSpecName: "config") pod "3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" (UID: "3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.585196 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77564f8754-gb7wv"] Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.612580 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" (UID: "3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.616490 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-logs\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.616855 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data-custom\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.617057 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-combined-ca-bundle\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.617465 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.617602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghfp\" (UniqueName: 
\"kubernetes.io/projected/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-kube-api-access-gghfp\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.617924 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.618468 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.618564 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z42zz\" (UniqueName: \"kubernetes.io/projected/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0-kube-api-access-z42zz\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.722760 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.722831 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-kube-api-access-gghfp\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.722888 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-logs\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.722974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data-custom\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.723007 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-combined-ca-bundle\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.724463 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-logs\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.729597 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-combined-ca-bundle\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.732011 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data-custom\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.734476 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.753524 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-kube-api-access-gghfp\") pod \"barbican-api-77564f8754-gb7wv\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.777395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.854109 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vxpjp" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.855218 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxpjp" event={"ID":"3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0","Type":"ContainerDied","Data":"2eb0c0ebf364311f7b7a7fad0ee1262f0a4fd264a539564dbef1a6e87328a287"} Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.855255 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb0c0ebf364311f7b7a7fad0ee1262f0a4fd264a539564dbef1a6e87328a287" Dec 03 09:31:10 crc kubenswrapper[4856]: I1203 09:31:10.895770 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.096824 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-blz8g"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.135515 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zfzjf"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.146527 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.167300 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zfzjf"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.214902 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-545b57f4f4-cmb44"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.231118 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74fd85d868-sh6bv"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.244792 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.245524 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.245627 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrwv\" (UniqueName: \"kubernetes.io/projected/76979c25-dc19-43a7-ab81-99a9c7097e20-kube-api-access-vcrwv\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.245663 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.245688 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-config\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.245759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.245862 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.254784 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.256500 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rwwvc" Dec 03 09:31:11 crc 
kubenswrapper[4856]: I1203 09:31:11.263452 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.271214 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.292222 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74fd85d868-sh6bv"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.308506 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68d6cb77d9-4m8kq"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393521 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393604 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-ovndb-tls-certs\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393654 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-httpd-config\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393880 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctl5z\" (UniqueName: \"kubernetes.io/projected/e119392f-94d6-436b-ac48-e548d91a8f0a-kube-api-access-ctl5z\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393910 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-combined-ca-bundle\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrwv\" (UniqueName: \"kubernetes.io/projected/76979c25-dc19-43a7-ab81-99a9c7097e20-kube-api-access-vcrwv\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.393975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.394012 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-config\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.394037 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-config\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.394272 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.395664 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.396742 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.398539 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.399244 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-config\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.417872 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.464461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrwv\" (UniqueName: \"kubernetes.io/projected/76979c25-dc19-43a7-ab81-99a9c7097e20-kube-api-access-vcrwv\") pod 
\"dnsmasq-dns-848cf88cfc-zfzjf\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.515592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-ovndb-tls-certs\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.515664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-httpd-config\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.515717 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctl5z\" (UniqueName: \"kubernetes.io/projected/e119392f-94d6-436b-ac48-e548d91a8f0a-kube-api-access-ctl5z\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.515742 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-combined-ca-bundle\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.515772 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-config\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.517533 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.538539 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-httpd-config\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.548001 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-config\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.559880 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctl5z\" (UniqueName: \"kubernetes.io/projected/e119392f-94d6-436b-ac48-e548d91a8f0a-kube-api-access-ctl5z\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.561327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-combined-ca-bundle\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.569209 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-ovndb-tls-certs\") pod \"neutron-74fd85d868-sh6bv\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") " pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.678974 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jpsww" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.748093 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.824944 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-scripts\") pod \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.825671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-db-sync-config-data\") pod \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.825874 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j574\" (UniqueName: \"kubernetes.io/projected/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-kube-api-access-8j574\") pod \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.825920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-etc-machine-id\") pod \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.826008 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-config-data\") pod \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.826029 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-combined-ca-bundle\") pod \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\" (UID: \"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b\") " Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.827397 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" (UID: "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.838063 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" (UID: "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.843280 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-kube-api-access-8j574" (OuterVolumeSpecName: "kube-api-access-8j574") pod "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" (UID: "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b"). InnerVolumeSpecName "kube-api-access-8j574". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.854347 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-blz8g"] Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.863044 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-scripts" (OuterVolumeSpecName: "scripts") pod "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" (UID: "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.863632 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" (UID: "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.910305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" event={"ID":"a8e157e2-2dcf-4664-9b48-1e6186729ef0","Type":"ContainerStarted","Data":"670c5793f6680f81ba2581bb6c99ac6838b1aea410584eaa25b18f284a413ba8"} Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.916941 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68d6cb77d9-4m8kq" event={"ID":"1a742807-921a-47f8-883b-10c4b972c350","Type":"ContainerStarted","Data":"723531a7d45862f4342edea014ab91caf4efe3eb99185156c8b15950c44d8374"} Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.931419 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j574\" (UniqueName: \"kubernetes.io/projected/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-kube-api-access-8j574\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.931447 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.931457 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.931470 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.931479 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.936229 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-config-data" (OuterVolumeSpecName: "config-data") pod "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" (UID: "74e3fff2-b7c3-4cd4-b62e-83af7da6e87b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.937487 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jpsww" event={"ID":"74e3fff2-b7c3-4cd4-b62e-83af7da6e87b","Type":"ContainerDied","Data":"de8bfd4ca7ffb1d5ada8173f52774ead63b497db6f789ecfec9bae839d66395b"} Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.937568 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de8bfd4ca7ffb1d5ada8173f52774ead63b497db6f789ecfec9bae839d66395b" Dec 03 09:31:11 crc kubenswrapper[4856]: I1203 09:31:11.937789 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jpsww" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.033805 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.093071 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77564f8754-gb7wv"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.233132 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:12 crc kubenswrapper[4856]: E1203 09:31:12.233729 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" containerName="cinder-db-sync" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.233745 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" containerName="cinder-db-sync" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.234040 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" containerName="cinder-db-sync" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.257399 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.258847 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zfzjf"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.265942 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.266979 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.267858 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.323415 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.385732 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-td4fj" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.392126 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d879170-7847-458b-87aa-dfc1de37383b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.392185 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.392302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.392412 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.392551 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.392699 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdl7c\" (UniqueName: \"kubernetes.io/projected/9d879170-7847-458b-87aa-dfc1de37383b-kube-api-access-sdl7c\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.421832 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zfzjf"] Dec 03 09:31:12 
crc kubenswrapper[4856]: I1203 09:31:12.471977 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cj2hk"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.485343 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.498375 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d879170-7847-458b-87aa-dfc1de37383b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.498439 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.498505 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.498560 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.498614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.498670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdl7c\" (UniqueName: \"kubernetes.io/projected/9d879170-7847-458b-87aa-dfc1de37383b-kube-api-access-sdl7c\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.499228 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d879170-7847-458b-87aa-dfc1de37383b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.513495 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.516751 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.528302 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.529109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.545816 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdl7c\" (UniqueName: \"kubernetes.io/projected/9d879170-7847-458b-87aa-dfc1de37383b-kube-api-access-sdl7c\") pod \"cinder-scheduler-0\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.590054 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cj2hk"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.601515 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.601939 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.602105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvsl\" (UniqueName: \"kubernetes.io/projected/5db70cd6-4b7f-4586-8f81-37066c3ef690-kube-api-access-5cvsl\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.602187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.602327 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-config\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.602489 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-svc\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.615952 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.620006 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.629860 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.689357 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704467 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704554 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gkn\" (UniqueName: \"kubernetes.io/projected/7cf71c3b-9863-4c05-84a1-ef3a684b3904-kube-api-access-z4gkn\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704606 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cvsl\" (UniqueName: \"kubernetes.io/projected/5db70cd6-4b7f-4586-8f81-37066c3ef690-kube-api-access-5cvsl\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf71c3b-9863-4c05-84a1-ef3a684b3904-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704756 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-config\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704822 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-scripts\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 
crc kubenswrapper[4856]: I1203 09:31:12.704867 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704897 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-svc\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704920 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.704977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.705008 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf71c3b-9863-4c05-84a1-ef3a684b3904-logs\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.705032 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.707586 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.707614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.712293 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-config\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.712454 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-svc\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.713655 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: W1203 09:31:12.737785 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode119392f_94d6_436b_ac48_e548d91a8f0a.slice/crio-3287c045fd93c4492180d8df98211aab998381229f00f41c160a1d6d7808bb03 WatchSource:0}: Error finding container 3287c045fd93c4492180d8df98211aab998381229f00f41c160a1d6d7808bb03: Status 404 returned error can't find the container with id 3287c045fd93c4492180d8df98211aab998381229f00f41c160a1d6d7808bb03 Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.751658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cvsl\" (UniqueName: \"kubernetes.io/projected/5db70cd6-4b7f-4586-8f81-37066c3ef690-kube-api-access-5cvsl\") pod \"dnsmasq-dns-6578955fd5-cj2hk\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.771654 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74fd85d868-sh6bv"] Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.806649 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4gkn\" (UniqueName: \"kubernetes.io/projected/7cf71c3b-9863-4c05-84a1-ef3a684b3904-kube-api-access-z4gkn\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.809343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf71c3b-9863-4c05-84a1-ef3a684b3904-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.809679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-scripts\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.809804 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.809921 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.810054 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf71c3b-9863-4c05-84a1-ef3a684b3904-logs\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.810140 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.821050 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.824124 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf71c3b-9863-4c05-84a1-ef3a684b3904-logs\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.824245 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf71c3b-9863-4c05-84a1-ef3a684b3904-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.827464 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.838506 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-scripts\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.844948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.851933 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4gkn\" (UniqueName: \"kubernetes.io/projected/7cf71c3b-9863-4c05-84a1-ef3a684b3904-kube-api-access-z4gkn\") pod \"cinder-api-0\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " pod="openstack/cinder-api-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.961291 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.969052 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" event={"ID":"76979c25-dc19-43a7-ab81-99a9c7097e20","Type":"ContainerStarted","Data":"59204131f67a7e189829fa353dcba565a054ddaa121b056da67721ca566fb3d0"} Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.992141 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" event={"ID":"206e6135-3677-45b6-96b6-3320bce29cd3","Type":"ContainerStarted","Data":"7bdbd5bba85cef63ac369c799e3f50420af6cadd4183125feda6636fbd79f232"} Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.992206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" event={"ID":"206e6135-3677-45b6-96b6-3320bce29cd3","Type":"ContainerStarted","Data":"1109485833de47b4970fe3c81148f3d5cf49ab85ae09cc6cc9fdea8e72a5a364"} Dec 03 09:31:12 crc kubenswrapper[4856]: I1203 09:31:12.997632 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fd85d868-sh6bv" event={"ID":"e119392f-94d6-436b-ac48-e548d91a8f0a","Type":"ContainerStarted","Data":"3287c045fd93c4492180d8df98211aab998381229f00f41c160a1d6d7808bb03"} Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.000838 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77564f8754-gb7wv" event={"ID":"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721","Type":"ContainerStarted","Data":"4f846bde358e5f6a8287bcbe548e3cb0622f47e869d3f9aa1ff568df334697c7"} Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.108065 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.148706 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 09:31:13 crc kubenswrapper[4856]: E1203 09:31:13.507415 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76979c25_dc19_43a7_ab81_99a9c7097e20.slice/crio-017574377e52341de1b353e7f084e41e0b84fe3f03c2131dfb310f16b9ec4079.scope\": RecentStats: unable to find data in memory cache]" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.766514 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.827945 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871005 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-scripts\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871341 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-log-httpd\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871399 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-run-httpd\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871427 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-combined-ca-bundle\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871591 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j7m5\" (UniqueName: \"kubernetes.io/projected/d683d201-f027-4492-966c-95fa0e5004cd-kube-api-access-8j7m5\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871662 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-config-data\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.871733 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-sg-core-conf-yaml\") pod \"d683d201-f027-4492-966c-95fa0e5004cd\" (UID: \"d683d201-f027-4492-966c-95fa0e5004cd\") " Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.875928 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.876225 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.879487 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-scripts" (OuterVolumeSpecName: "scripts") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.893176 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d683d201-f027-4492-966c-95fa0e5004cd-kube-api-access-8j7m5" (OuterVolumeSpecName: "kube-api-access-8j7m5") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "kube-api-access-8j7m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.915422 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.920156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-config-data" (OuterVolumeSpecName: "config-data") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.920425 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d683d201-f027-4492-966c-95fa0e5004cd" (UID: "d683d201-f027-4492-966c-95fa0e5004cd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977739 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977778 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977788 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d683d201-f027-4492-966c-95fa0e5004cd-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977798 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977831 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j7m5\" (UniqueName: \"kubernetes.io/projected/d683d201-f027-4492-966c-95fa0e5004cd-kube-api-access-8j7m5\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977842 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:13 crc kubenswrapper[4856]: I1203 09:31:13.977851 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d683d201-f027-4492-966c-95fa0e5004cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.085642 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cj2hk"] Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.088197 4856 generic.go:334] "Generic (PLEG): container finished" podID="76979c25-dc19-43a7-ab81-99a9c7097e20" containerID="017574377e52341de1b353e7f084e41e0b84fe3f03c2131dfb310f16b9ec4079" exitCode=0 Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.088327 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" event={"ID":"76979c25-dc19-43a7-ab81-99a9c7097e20","Type":"ContainerDied","Data":"017574377e52341de1b353e7f084e41e0b84fe3f03c2131dfb310f16b9ec4079"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.120696 4856 generic.go:334] "Generic (PLEG): container finished" podID="206e6135-3677-45b6-96b6-3320bce29cd3" containerID="7bdbd5bba85cef63ac369c799e3f50420af6cadd4183125feda6636fbd79f232" exitCode=0 Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.120888 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" event={"ID":"206e6135-3677-45b6-96b6-3320bce29cd3","Type":"ContainerDied","Data":"7bdbd5bba85cef63ac369c799e3f50420af6cadd4183125feda6636fbd79f232"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.163527 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fd85d868-sh6bv" event={"ID":"e119392f-94d6-436b-ac48-e548d91a8f0a","Type":"ContainerStarted","Data":"89de9afc15304197d4096c2102f77a838bc3f096ae738c480b0548a5b97fd708"} Dec 03 09:31:14 crc 
kubenswrapper[4856]: I1203 09:31:14.163583 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fd85d868-sh6bv" event={"ID":"e119392f-94d6-436b-ac48-e548d91a8f0a","Type":"ContainerStarted","Data":"e112b0ed988513a47362d5c192e3a59fda82b7af151a23e36864a8eb36d0c0fd"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.165112 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.197845 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77564f8754-gb7wv" event={"ID":"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721","Type":"ContainerStarted","Data":"bcf5134c16011651b10da761e1d3f90b81db0cbda437b67e77402c5be32bc05b"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.197895 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77564f8754-gb7wv" event={"ID":"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721","Type":"ContainerStarted","Data":"0290249b09cb99f4ed3bc6392610beaf97a267b4c619c39526d4742267bddb1a"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.198836 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.198868 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.205423 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d879170-7847-458b-87aa-dfc1de37383b","Type":"ContainerStarted","Data":"835a4a147223c5995563b2b6fbb89c371df0cb3ab154ff9078211cc39f9fb5a1"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.237872 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.240183 4856 generic.go:334] "Generic (PLEG): container finished" podID="d683d201-f027-4492-966c-95fa0e5004cd" containerID="6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8" exitCode=0 Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.241746 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d683d201-f027-4492-966c-95fa0e5004cd","Type":"ContainerDied","Data":"6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.241796 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d683d201-f027-4492-966c-95fa0e5004cd","Type":"ContainerDied","Data":"ff6b78251b071bd45823f4c3bd24de9f20dc1b24ebc3698d47aed4a6f9e106f1"} Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.241837 4856 scope.go:117] "RemoveContainer" containerID="a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.242106 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.258592 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74fd85d868-sh6bv" podStartSLOduration=3.258558162 podStartE2EDuration="3.258558162s" podCreationTimestamp="2025-12-03 09:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:14.228774369 +0000 UTC m=+1142.411666670" watchObservedRunningTime="2025-12-03 09:31:14.258558162 +0000 UTC m=+1142.441450463" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.334796 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77564f8754-gb7wv" podStartSLOduration=4.33476412 podStartE2EDuration="4.33476412s" podCreationTimestamp="2025-12-03 09:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:14.294599986 +0000 UTC m=+1142.477492287" watchObservedRunningTime="2025-12-03 09:31:14.33476412 +0000 UTC m=+1142.517656421" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.428690 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.488953 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.516902 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:14 crc kubenswrapper[4856]: E1203 09:31:14.517652 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="ceilometer-notification-agent" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.517672 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="ceilometer-notification-agent" Dec 03 09:31:14 crc kubenswrapper[4856]: E1203 09:31:14.517690 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="sg-core" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.517698 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="sg-core" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.518012 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="sg-core" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.518028 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d683d201-f027-4492-966c-95fa0e5004cd" containerName="ceilometer-notification-agent" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.520706 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.521320 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.538499 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.539738 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.596474 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-scripts\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.596538 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-run-httpd\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.596601 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-config-data\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.596689 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.596719 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.596753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-log-httpd\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.597348 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbf4v\" (UniqueName: \"kubernetes.io/projected/4292f186-8667-43a4-90bb-1f5202e3d7c7-kube-api-access-zbf4v\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.709927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbf4v\" (UniqueName: \"kubernetes.io/projected/4292f186-8667-43a4-90bb-1f5202e3d7c7-kube-api-access-zbf4v\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: 
I1203 09:31:14.710970 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-scripts\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.711071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-run-httpd\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.711204 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-config-data\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.711296 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.711339 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.716242 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d683d201-f027-4492-966c-95fa0e5004cd" path="/var/lib/kubelet/pods/d683d201-f027-4492-966c-95fa0e5004cd/volumes" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.717963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.718753 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-log-httpd\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.720267 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-run-httpd\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.720560 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-log-httpd\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.723738 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-scripts\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.725546 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-config-data\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.738409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbf4v\" (UniqueName: \"kubernetes.io/projected/4292f186-8667-43a4-90bb-1f5202e3d7c7-kube-api-access-zbf4v\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.741572 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " pod="openstack/ceilometer-0" Dec 03 09:31:14 crc kubenswrapper[4856]: I1203 09:31:14.886573 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:15 crc kubenswrapper[4856]: W1203 09:31:15.066590 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db70cd6_4b7f_4586_8f81_37066c3ef690.slice/crio-e4b357c4c453106eb6bb7bc77885eb517bbb47ea322022cf6b506ce0eb82b26f WatchSource:0}: Error finding container e4b357c4c453106eb6bb7bc77885eb517bbb47ea322022cf6b506ce0eb82b26f: Status 404 returned error can't find the container with id e4b357c4c453106eb6bb7bc77885eb517bbb47ea322022cf6b506ce0eb82b26f Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.185372 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.219439 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.256320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" event={"ID":"76979c25-dc19-43a7-ab81-99a9c7097e20","Type":"ContainerDied","Data":"59204131f67a7e189829fa353dcba565a054ddaa121b056da67721ca566fb3d0"} Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.256478 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zfzjf" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.261118 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" event={"ID":"5db70cd6-4b7f-4586-8f81-37066c3ef690","Type":"ContainerStarted","Data":"e4b357c4c453106eb6bb7bc77885eb517bbb47ea322022cf6b506ce0eb82b26f"} Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.349321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-svc\") pod \"76979c25-dc19-43a7-ab81-99a9c7097e20\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.349526 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-config\") pod \"76979c25-dc19-43a7-ab81-99a9c7097e20\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.349585 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrwv\" (UniqueName: \"kubernetes.io/projected/76979c25-dc19-43a7-ab81-99a9c7097e20-kube-api-access-vcrwv\") pod \"76979c25-dc19-43a7-ab81-99a9c7097e20\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.349645 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-sb\") pod \"76979c25-dc19-43a7-ab81-99a9c7097e20\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.349737 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-swift-storage-0\") pod \"76979c25-dc19-43a7-ab81-99a9c7097e20\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.349789 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-nb\") pod \"76979c25-dc19-43a7-ab81-99a9c7097e20\" (UID: \"76979c25-dc19-43a7-ab81-99a9c7097e20\") " Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.359980 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76979c25-dc19-43a7-ab81-99a9c7097e20-kube-api-access-vcrwv" (OuterVolumeSpecName: "kube-api-access-vcrwv") pod "76979c25-dc19-43a7-ab81-99a9c7097e20" (UID: "76979c25-dc19-43a7-ab81-99a9c7097e20"). InnerVolumeSpecName "kube-api-access-vcrwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.400673 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcrwv\" (UniqueName: \"kubernetes.io/projected/76979c25-dc19-43a7-ab81-99a9c7097e20-kube-api-access-vcrwv\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.410178 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-config" (OuterVolumeSpecName: "config") pod "76979c25-dc19-43a7-ab81-99a9c7097e20" (UID: "76979c25-dc19-43a7-ab81-99a9c7097e20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.414045 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76979c25-dc19-43a7-ab81-99a9c7097e20" (UID: "76979c25-dc19-43a7-ab81-99a9c7097e20"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.426091 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76979c25-dc19-43a7-ab81-99a9c7097e20" (UID: "76979c25-dc19-43a7-ab81-99a9c7097e20"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.429091 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76979c25-dc19-43a7-ab81-99a9c7097e20" (UID: "76979c25-dc19-43a7-ab81-99a9c7097e20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.496582 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76979c25-dc19-43a7-ab81-99a9c7097e20" (UID: "76979c25-dc19-43a7-ab81-99a9c7097e20"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.508088 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.508162 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.508208 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.508223 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.508235 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76979c25-dc19-43a7-ab81-99a9c7097e20-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.620479 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zfzjf"] Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.630369 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-777b75cf48-68qq9" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 03 09:31:15 crc kubenswrapper[4856]: I1203 09:31:15.633542 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zfzjf"] Dec 03 09:31:15 crc kubenswrapper[4856]: W1203 09:31:15.992272 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf71c3b_9863_4c05_84a1_ef3a684b3904.slice/crio-5b090a5eb54045c1d1fc16ce38cdd6d67301a92d7e8363d1347113f7eed57422 WatchSource:0}: Error finding container 5b090a5eb54045c1d1fc16ce38cdd6d67301a92d7e8363d1347113f7eed57422: Status 404 returned error can't find the container with id 5b090a5eb54045c1d1fc16ce38cdd6d67301a92d7e8363d1347113f7eed57422 Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.011694 4856 scope.go:117] "RemoveContainer" containerID="6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.111177 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.222662 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-config\") pod \"206e6135-3677-45b6-96b6-3320bce29cd3\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.222795 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-nb\") pod \"206e6135-3677-45b6-96b6-3320bce29cd3\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.222971 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-swift-storage-0\") pod \"206e6135-3677-45b6-96b6-3320bce29cd3\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.223011 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-sb\") pod \"206e6135-3677-45b6-96b6-3320bce29cd3\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.223031 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clhc6\" (UniqueName: \"kubernetes.io/projected/206e6135-3677-45b6-96b6-3320bce29cd3-kube-api-access-clhc6\") pod \"206e6135-3677-45b6-96b6-3320bce29cd3\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.223072 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-svc\") pod \"206e6135-3677-45b6-96b6-3320bce29cd3\" (UID: \"206e6135-3677-45b6-96b6-3320bce29cd3\") " Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.229154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206e6135-3677-45b6-96b6-3320bce29cd3-kube-api-access-clhc6" (OuterVolumeSpecName: "kube-api-access-clhc6") pod "206e6135-3677-45b6-96b6-3320bce29cd3" (UID: "206e6135-3677-45b6-96b6-3320bce29cd3"). InnerVolumeSpecName "kube-api-access-clhc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.248065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "206e6135-3677-45b6-96b6-3320bce29cd3" (UID: "206e6135-3677-45b6-96b6-3320bce29cd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.249142 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-config" (OuterVolumeSpecName: "config") pod "206e6135-3677-45b6-96b6-3320bce29cd3" (UID: "206e6135-3677-45b6-96b6-3320bce29cd3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.249470 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "206e6135-3677-45b6-96b6-3320bce29cd3" (UID: "206e6135-3677-45b6-96b6-3320bce29cd3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.253102 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "206e6135-3677-45b6-96b6-3320bce29cd3" (UID: "206e6135-3677-45b6-96b6-3320bce29cd3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.259624 4856 scope.go:117] "RemoveContainer" containerID="a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10" Dec 03 09:31:16 crc kubenswrapper[4856]: E1203 09:31:16.260349 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10\": container with ID starting with a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10 not found: ID does not exist" containerID="a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.260400 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10"} err="failed to get container status \"a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10\": rpc error: code = NotFound desc = could not find container \"a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10\": container with ID starting with a58fae6eebaadfc0c9a75980216376abecc014b05b8c2bebd699b64809139d10 not found: ID does not exist" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.260430 4856 scope.go:117] "RemoveContainer" containerID="6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8" Dec 03 09:31:16 crc kubenswrapper[4856]: E1203 09:31:16.260707 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8\": container with ID starting with 6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8 not found: ID does not exist" containerID="6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.260726 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8"} err="failed to get container status \"6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8\": rpc error: code = NotFound desc = could not find container \"6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8\": container with ID starting with 6202af332af844f2b62bf2ea6a92581e7e489fa1f2451496e0f5b8db5969b5e8 not found: ID does not exist" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.260737 4856 scope.go:117] "RemoveContainer" 
containerID="017574377e52341de1b353e7f084e41e0b84fe3f03c2131dfb310f16b9ec4079" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.262642 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "206e6135-3677-45b6-96b6-3320bce29cd3" (UID: "206e6135-3677-45b6-96b6-3320bce29cd3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.271031 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cf71c3b-9863-4c05-84a1-ef3a684b3904","Type":"ContainerStarted","Data":"5b090a5eb54045c1d1fc16ce38cdd6d67301a92d7e8363d1347113f7eed57422"} Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.273200 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.273377 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-blz8g" event={"ID":"206e6135-3677-45b6-96b6-3320bce29cd3","Type":"ContainerDied","Data":"1109485833de47b4970fe3c81148f3d5cf49ab85ae09cc6cc9fdea8e72a5a364"} Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.310739 4856 scope.go:117] "RemoveContainer" containerID="7bdbd5bba85cef63ac369c799e3f50420af6cadd4183125feda6636fbd79f232" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.328382 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.328411 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.328457 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.328469 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.328480 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clhc6\" (UniqueName: \"kubernetes.io/projected/206e6135-3677-45b6-96b6-3320bce29cd3-kube-api-access-clhc6\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.328488 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/206e6135-3677-45b6-96b6-3320bce29cd3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.353274 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-blz8g"] Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.362700 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-blz8g"] Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.718671 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="206e6135-3677-45b6-96b6-3320bce29cd3" path="/var/lib/kubelet/pods/206e6135-3677-45b6-96b6-3320bce29cd3/volumes" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.719985 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76979c25-dc19-43a7-ab81-99a9c7097e20" path="/var/lib/kubelet/pods/76979c25-dc19-43a7-ab81-99a9c7097e20/volumes" Dec 03 09:31:16 crc kubenswrapper[4856]: I1203 09:31:16.964242 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:16 crc kubenswrapper[4856]: W1203 09:31:16.964939 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4292f186_8667_43a4_90bb_1f5202e3d7c7.slice/crio-5d1fa9813713b0237856a148ac8744f3afee6fda4e545472f3f4e9ec9309ccda WatchSource:0}: Error finding container 5d1fa9813713b0237856a148ac8744f3afee6fda4e545472f3f4e9ec9309ccda: Status 404 returned error can't find the container with id 5d1fa9813713b0237856a148ac8744f3afee6fda4e545472f3f4e9ec9309ccda Dec 03 09:31:17 crc kubenswrapper[4856]: I1203 09:31:17.314323 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" event={"ID":"a8e157e2-2dcf-4664-9b48-1e6186729ef0","Type":"ContainerStarted","Data":"213e1f5d5b3b432ac9038dc0e0df5984409231d5bf00c140df0ce394bd355adb"} Dec 03 09:31:17 crc kubenswrapper[4856]: I1203 09:31:17.342423 4856 generic.go:334] "Generic (PLEG): container finished" podID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerID="3812b55f5fafcd6902bec9b9bbf41673d09b01072e5d25d9ea9026640bbc2927" exitCode=0 Dec 03 09:31:17 crc kubenswrapper[4856]: I1203 09:31:17.344436 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" event={"ID":"5db70cd6-4b7f-4586-8f81-37066c3ef690","Type":"ContainerDied","Data":"3812b55f5fafcd6902bec9b9bbf41673d09b01072e5d25d9ea9026640bbc2927"} Dec 03 09:31:17 crc kubenswrapper[4856]: I1203 09:31:17.373189 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68d6cb77d9-4m8kq" event={"ID":"1a742807-921a-47f8-883b-10c4b972c350","Type":"ContainerStarted","Data":"4a66cbb16a16b0f66eae2641cd5ba2e2aef27ee8cedae00c56c56127e65ee466"} Dec 03 09:31:17 crc kubenswrapper[4856]: I1203 09:31:17.378247 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerStarted","Data":"5d1fa9813713b0237856a148ac8744f3afee6fda4e545472f3f4e9ec9309ccda"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.170605 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7844d7bfd9-p972t"] Dec 03 09:31:18 crc kubenswrapper[4856]: E1203 09:31:18.171281 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76979c25-dc19-43a7-ab81-99a9c7097e20" containerName="init" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.171296 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="76979c25-dc19-43a7-ab81-99a9c7097e20" containerName="init" Dec 03 09:31:18 crc kubenswrapper[4856]: E1203 09:31:18.171319 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206e6135-3677-45b6-96b6-3320bce29cd3" containerName="init" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.171325 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="206e6135-3677-45b6-96b6-3320bce29cd3" containerName="init" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 
09:31:18.173771 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="76979c25-dc19-43a7-ab81-99a9c7097e20" containerName="init" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.173840 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="206e6135-3677-45b6-96b6-3320bce29cd3" containerName="init" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.175562 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.182068 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7844d7bfd9-p972t"] Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.188340 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.188875 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.330783 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-public-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.331344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-internal-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.331367 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfc9v\" (UniqueName: \"kubernetes.io/projected/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-kube-api-access-gfc9v\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.331399 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-combined-ca-bundle\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.331423 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-config\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.331446 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-httpd-config\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.331527 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-ovndb-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.396571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cf71c3b-9863-4c05-84a1-ef3a684b3904","Type":"ContainerStarted","Data":"bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.398891 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerStarted","Data":"a57f8b98c6967f5209e25e844848a1aaa26e2e7f3fddc604b1cabaa765716db3"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.400841 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" event={"ID":"a8e157e2-2dcf-4664-9b48-1e6186729ef0","Type":"ContainerStarted","Data":"735372656acfa4bf7135d8f9153744cf9b0beca52ea17f72753a003a98b35aaf"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.404174 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d879170-7847-458b-87aa-dfc1de37383b","Type":"ContainerStarted","Data":"8eee48455182d87ac46dd75dc4db1bae60c7edc2cf8c1e15b2f893b3b35951ad"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.420361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" event={"ID":"5db70cd6-4b7f-4586-8f81-37066c3ef690","Type":"ContainerStarted","Data":"9848719d68973976d64288dc87eb709b07be96f0b7cc3cdc30176f4d28a1ce98"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.421405 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.423280 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-545b57f4f4-cmb44" podStartSLOduration=3.162450038 podStartE2EDuration="8.423265603s" podCreationTimestamp="2025-12-03 09:31:10 +0000 UTC" firstStartedPulling="2025-12-03 09:31:11.328836403 +0000 UTC m=+1139.511728704" lastFinishedPulling="2025-12-03 09:31:16.589651968 +0000 UTC m=+1144.772544269" observedRunningTime="2025-12-03 09:31:18.420746902 +0000 UTC m=+1146.603639203" watchObservedRunningTime="2025-12-03 09:31:18.423265603 +0000 UTC m=+1146.606157904" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-public-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-internal-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433194 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gfc9v\" (UniqueName: \"kubernetes.io/projected/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-kube-api-access-gfc9v\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-combined-ca-bundle\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-config\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-httpd-config\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.433364 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-ovndb-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.445592 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-httpd-config\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.446202 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-ovndb-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.446703 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-public-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.448670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68d6cb77d9-4m8kq" event={"ID":"1a742807-921a-47f8-883b-10c4b972c350","Type":"ContainerStarted","Data":"cd0e4742f8bfa5342341b5079cbdd6f85bffd958efc25ee1590073817533053f"} Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.451654 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-config\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " 
pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.454745 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-internal-tls-certs\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.466720 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-combined-ca-bundle\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.471572 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" podStartSLOduration=6.471543354 podStartE2EDuration="6.471543354s" podCreationTimestamp="2025-12-03 09:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:18.464415541 +0000 UTC m=+1146.647307882" watchObservedRunningTime="2025-12-03 09:31:18.471543354 +0000 UTC m=+1146.654435645" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.484689 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfc9v\" (UniqueName: \"kubernetes.io/projected/d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8-kube-api-access-gfc9v\") pod \"neutron-7844d7bfd9-p972t\" (UID: \"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8\") " pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.494848 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68d6cb77d9-4m8kq" podStartSLOduration=3.563521048 podStartE2EDuration="8.494791408s" podCreationTimestamp="2025-12-03 09:31:10 +0000 UTC" firstStartedPulling="2025-12-03 09:31:11.43630879 +0000 UTC m=+1139.619201101" lastFinishedPulling="2025-12-03 09:31:16.36757916 +0000 UTC m=+1144.550471461" observedRunningTime="2025-12-03 09:31:18.485954804 +0000 UTC m=+1146.668847115" watchObservedRunningTime="2025-12-03 09:31:18.494791408 +0000 UTC m=+1146.677683709" Dec 03 09:31:18 crc kubenswrapper[4856]: I1203 09:31:18.536336 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.416088 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7844d7bfd9-p972t"] Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.486503 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cf71c3b-9863-4c05-84a1-ef3a684b3904","Type":"ContainerStarted","Data":"e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c"} Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.486746 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api-log" containerID="cri-o://bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40" gracePeriod=30 Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.486928 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.487141 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api" containerID="cri-o://e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c" gracePeriod=30 Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.508638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerStarted","Data":"36693f9fbff707ce89e1c2f425e618ba88c228c705b50f4de9449adf5191ea68"} Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.519334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d879170-7847-458b-87aa-dfc1de37383b","Type":"ContainerStarted","Data":"ce9328ea250e9d9f0906256aabef49f7b136f1f874777f4f10f88992d48558a2"} Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.528637 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7844d7bfd9-p972t" event={"ID":"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8","Type":"ContainerStarted","Data":"b4d01786c161533660ec385c053105db057d281c3f5cd2b41e8647f310f8e535"} Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.544415 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.544377443 podStartE2EDuration="7.544377443s" podCreationTimestamp="2025-12-03 09:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:19.537866895 +0000 UTC m=+1147.720759206" watchObservedRunningTime="2025-12-03 09:31:19.544377443 +0000 UTC m=+1147.727269744" Dec 03 09:31:19 crc kubenswrapper[4856]: I1203 09:31:19.577604 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.533645118 podStartE2EDuration="7.577572538s" podCreationTimestamp="2025-12-03 09:31:12 +0000 UTC" firstStartedPulling="2025-12-03 09:31:13.827403892 +0000 UTC m=+1142.010296193" lastFinishedPulling="2025-12-03 09:31:16.871331312 +0000 UTC m=+1145.054223613" observedRunningTime="2025-12-03 09:31:19.576378539 +0000 UTC m=+1147.759270860" watchObservedRunningTime="2025-12-03 09:31:19.577572538 +0000 UTC m=+1147.760465189" Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.542310 4856 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-7844d7bfd9-p972t" event={"ID":"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8","Type":"ContainerStarted","Data":"4631aef46a6ff619bdc62443f873bfebaa0f264d7a080c1c5fddedc8ac5722b8"} Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.546878 4856 generic.go:334] "Generic (PLEG): container finished" podID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerID="bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40" exitCode=143 Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.546950 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cf71c3b-9863-4c05-84a1-ef3a684b3904","Type":"ContainerDied","Data":"bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40"} Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.551257 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerStarted","Data":"f07ea5d73c066c4c256636ba20729842231d25b3f549885cbb535d1542cedc15"} Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.872017 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f7df59b96-db6lp"] Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.873743 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.878736 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.881726 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 03 09:31:20 crc kubenswrapper[4856]: I1203 09:31:20.896521 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f7df59b96-db6lp"] Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017020 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-config-data-custom\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017069 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7f85ba-81a1-4b35-8620-2c24b08b5101-logs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017134 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbhcw\" (UniqueName: \"kubernetes.io/projected/dc7f85ba-81a1-4b35-8620-2c24b08b5101-kube-api-access-jbhcw\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017154 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-internal-tls-certs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " 
pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017214 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-public-tls-certs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-config-data\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.017299 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-combined-ca-bundle\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120178 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-config-data\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120279 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-combined-ca-bundle\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-config-data-custom\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120359 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7f85ba-81a1-4b35-8620-2c24b08b5101-logs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120425 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbhcw\" (UniqueName: \"kubernetes.io/projected/dc7f85ba-81a1-4b35-8620-2c24b08b5101-kube-api-access-jbhcw\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120451 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-internal-tls-certs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: 
\"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.120506 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-public-tls-certs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.122743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7f85ba-81a1-4b35-8620-2c24b08b5101-logs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.129656 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-public-tls-certs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.133322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-internal-tls-certs\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.135528 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-config-data-custom\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.138922 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-config-data\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.140447 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f85ba-81a1-4b35-8620-2c24b08b5101-combined-ca-bundle\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.148466 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbhcw\" (UniqueName: \"kubernetes.io/projected/dc7f85ba-81a1-4b35-8620-2c24b08b5101-kube-api-access-jbhcw\") pod \"barbican-api-6f7df59b96-db6lp\" (UID: \"dc7f85ba-81a1-4b35-8620-2c24b08b5101\") " pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.206132 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.230906 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.327456 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-scripts\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.327883 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf71c3b-9863-4c05-84a1-ef3a684b3904-etc-machine-id\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.327963 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4gkn\" (UniqueName: \"kubernetes.io/projected/7cf71c3b-9863-4c05-84a1-ef3a684b3904-kube-api-access-z4gkn\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.328020 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data-custom\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.328090 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.328157 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-combined-ca-bundle\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.328194 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf71c3b-9863-4c05-84a1-ef3a684b3904-logs\") pod \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\" (UID: \"7cf71c3b-9863-4c05-84a1-ef3a684b3904\") " Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.329122 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf71c3b-9863-4c05-84a1-ef3a684b3904-logs" (OuterVolumeSpecName: "logs") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.329756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf71c3b-9863-4c05-84a1-ef3a684b3904-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.337955 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf71c3b-9863-4c05-84a1-ef3a684b3904-kube-api-access-z4gkn" (OuterVolumeSpecName: "kube-api-access-z4gkn") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "kube-api-access-z4gkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.338023 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.344965 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-scripts" (OuterVolumeSpecName: "scripts") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.411968 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data" (OuterVolumeSpecName: "config-data") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.426439 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cf71c3b-9863-4c05-84a1-ef3a684b3904" (UID: "7cf71c3b-9863-4c05-84a1-ef3a684b3904"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435634 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435684 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435693 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435706 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf71c3b-9863-4c05-84a1-ef3a684b3904-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435716 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf71c3b-9863-4c05-84a1-ef3a684b3904-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435725 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cf71c3b-9863-4c05-84a1-ef3a684b3904-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.435734 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4gkn\" (UniqueName: \"kubernetes.io/projected/7cf71c3b-9863-4c05-84a1-ef3a684b3904-kube-api-access-z4gkn\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.576957 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7844d7bfd9-p972t" event={"ID":"d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8","Type":"ContainerStarted","Data":"15216040325cc79a0f3a73c2392833c484caeaa33f401a4e5dccef2a7d227cc0"} Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.577307 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.581573 4856 generic.go:334] "Generic (PLEG): container finished" podID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerID="e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c" exitCode=0 Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.581610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cf71c3b-9863-4c05-84a1-ef3a684b3904","Type":"ContainerDied","Data":"e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c"} Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.581637 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cf71c3b-9863-4c05-84a1-ef3a684b3904","Type":"ContainerDied","Data":"5b090a5eb54045c1d1fc16ce38cdd6d67301a92d7e8363d1347113f7eed57422"} Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.581654 4856 scope.go:117] "RemoveContainer" containerID="e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.581774 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.614159 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7844d7bfd9-p972t" podStartSLOduration=3.614129387 podStartE2EDuration="3.614129387s" podCreationTimestamp="2025-12-03 09:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:21.593864945 +0000 UTC m=+1149.776757256" watchObservedRunningTime="2025-12-03 09:31:21.614129387 +0000 UTC m=+1149.797021688" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.653582 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.669901 4856 scope.go:117] "RemoveContainer" containerID="bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.671768 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.681796 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:21 crc kubenswrapper[4856]: E1203 09:31:21.682341 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api-log" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.682368 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api-log" Dec 03 09:31:21 crc kubenswrapper[4856]: E1203 09:31:21.682390 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.682399 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.682615 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.682648 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" containerName="cinder-api-log" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.683975 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.698403 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.698649 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.698842 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.719165 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.737265 4856 scope.go:117] "RemoveContainer" containerID="e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c" Dec 03 09:31:21 crc kubenswrapper[4856]: E1203 09:31:21.743928 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c\": container with ID starting with e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c not found: ID does not exist" containerID="e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.743986 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c"} err="failed to get container status \"e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c\": rpc error: code = NotFound desc = could not find container \"e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c\": container with ID starting with e866511742548071987589c16f59d955f0285d1748c2ec0f6cfb2d948668a41c not found: ID does not exist" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.744012 4856 scope.go:117] "RemoveContainer" containerID="bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40" Dec 03 09:31:21 crc kubenswrapper[4856]: E1203 09:31:21.745606 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40\": container with ID starting with bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40 not found: ID does not exist" containerID="bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.745664 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40"} err="failed to get container status \"bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40\": rpc error: code = NotFound desc = could not find container \"bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40\": container with ID starting with bb0b80ff328d4aa922a868bdd66af1517c531c01a2683170ea760eb68ed94e40 not found: ID does not exist" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.806338 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f7df59b96-db6lp"] Dec 03 09:31:21 crc kubenswrapper[4856]: W1203 09:31:21.810660 4856 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7f85ba_81a1_4b35_8620_2c24b08b5101.slice/crio-bcc42f453499dcc34314fdf72ddf7259c3962f00cfda3a3ae06dfc3a2e6568b7 WatchSource:0}: Error finding container bcc42f453499dcc34314fdf72ddf7259c3962f00cfda3a3ae06dfc3a2e6568b7: Status 404 returned error can't find the container with id bcc42f453499dcc34314fdf72ddf7259c3962f00cfda3a3ae06dfc3a2e6568b7 Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846234 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846321 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846347 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-config-data\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846397 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-scripts\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846474 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jw9f\" (UniqueName: \"kubernetes.io/projected/34eb0c70-af06-4124-a1e5-fd6010205b6d-kube-api-access-6jw9f\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846509 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34eb0c70-af06-4124-a1e5-fd6010205b6d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-config-data-custom\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846614 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34eb0c70-af06-4124-a1e5-fd6010205b6d-logs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.846659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.950921 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.950983 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-config-data\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951060 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-scripts\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951131 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jw9f\" (UniqueName: \"kubernetes.io/projected/34eb0c70-af06-4124-a1e5-fd6010205b6d-kube-api-access-6jw9f\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951172 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34eb0c70-af06-4124-a1e5-fd6010205b6d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951221 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-config-data-custom\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34eb0c70-af06-4124-a1e5-fd6010205b6d-logs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.951321 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.952018 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34eb0c70-af06-4124-a1e5-fd6010205b6d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.958279 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34eb0c70-af06-4124-a1e5-fd6010205b6d-logs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.966466 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.967443 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.967617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-config-data-custom\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.968109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.974769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-scripts\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.975983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34eb0c70-af06-4124-a1e5-fd6010205b6d-config-data\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:21 crc kubenswrapper[4856]: I1203 09:31:21.992575 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jw9f\" (UniqueName: \"kubernetes.io/projected/34eb0c70-af06-4124-a1e5-fd6010205b6d-kube-api-access-6jw9f\") pod \"cinder-api-0\" (UID: \"34eb0c70-af06-4124-a1e5-fd6010205b6d\") " pod="openstack/cinder-api-0" Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.011919 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.680569 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f7df59b96-db6lp" event={"ID":"dc7f85ba-81a1-4b35-8620-2c24b08b5101","Type":"ContainerStarted","Data":"a68fef5a7e37916a29b3312605e35bcc24f40b911ad414b6e0299bcd4b79be9f"} Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.681960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f7df59b96-db6lp" event={"ID":"dc7f85ba-81a1-4b35-8620-2c24b08b5101","Type":"ContainerStarted","Data":"bcc42f453499dcc34314fdf72ddf7259c3962f00cfda3a3ae06dfc3a2e6568b7"} Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.713881 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf71c3b-9863-4c05-84a1-ef3a684b3904" path="/var/lib/kubelet/pods/7cf71c3b-9863-4c05-84a1-ef3a684b3904/volumes" Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.762430 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.821632 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.8944182339999998 podStartE2EDuration="8.821597782s" podCreationTimestamp="2025-12-03 09:31:14 +0000 UTC" firstStartedPulling="2025-12-03 09:31:16.996041208 +0000 UTC m=+1145.178933499" lastFinishedPulling="2025-12-03 09:31:21.923220746 +0000 UTC m=+1150.106113047" observedRunningTime="2025-12-03 09:31:22.79677074 +0000 UTC m=+1150.979663041" watchObservedRunningTime="2025-12-03 09:31:22.821597782 +0000 UTC m=+1151.004490093" Dec 03 09:31:22 crc kubenswrapper[4856]: I1203 09:31:22.963514 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.118019 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.342288 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kv9q8"] Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.342559 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerName="dnsmasq-dns" containerID="cri-o://5ae544c6f69bc1cc6871772f869f660717a077be5989d3dff6c80edf8291f55f" gracePeriod=10 Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.827124 4856 generic.go:334] "Generic (PLEG): container finished" podID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerID="5ae544c6f69bc1cc6871772f869f660717a077be5989d3dff6c80edf8291f55f" exitCode=0 Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.827584 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" event={"ID":"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2","Type":"ContainerDied","Data":"5ae544c6f69bc1cc6871772f869f660717a077be5989d3dff6c80edf8291f55f"} Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.830359 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34eb0c70-af06-4124-a1e5-fd6010205b6d","Type":"ContainerStarted","Data":"a2bfb2f50222347ff4bd5aad0458dd9bf15de99fc7b86cbb01c20382700f14e4"} Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.863268 4856 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-api-6f7df59b96-db6lp" event={"ID":"dc7f85ba-81a1-4b35-8620-2c24b08b5101","Type":"ContainerStarted","Data":"d3ede1779ea80cfda949c9211268ce496531bd09d894579f8aea99c81fe8c637"} Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.865666 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.865696 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.915962 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f7df59b96-db6lp" podStartSLOduration=3.915945083 podStartE2EDuration="3.915945083s" podCreationTimestamp="2025-12-03 09:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:23.91540799 +0000 UTC m=+1152.098300291" watchObservedRunningTime="2025-12-03 09:31:23.915945083 +0000 UTC m=+1152.098837384" Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.934761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerStarted","Data":"ef57f5337410de9421231ef5671f7950dce9686824b6251d50e5279441fe8cac"} Dec 03 09:31:23 crc kubenswrapper[4856]: I1203 09:31:23.936199 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.018039 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.150825 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.654031 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.848575 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-swift-storage-0\") pod \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.848746 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-nb\") pod \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.848766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7qrq\" (UniqueName: \"kubernetes.io/projected/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-kube-api-access-v7qrq\") pod \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.848906 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-svc\") pod \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.848971 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-config\") pod \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.849053 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-sb\") pod \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\" (UID: \"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2\") " Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.896400 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-kube-api-access-v7qrq" (OuterVolumeSpecName: "kube-api-access-v7qrq") pod "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" (UID: "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2"). InnerVolumeSpecName "kube-api-access-v7qrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.958495 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7qrq\" (UniqueName: \"kubernetes.io/projected/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-kube-api-access-v7qrq\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.976093 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" event={"ID":"f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2","Type":"ContainerDied","Data":"06955b4c4b9948a0587778e0c3499b0b02489ee960190ae3aa936923a2677590"} Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.976163 4856 scope.go:117] "RemoveContainer" containerID="5ae544c6f69bc1cc6871772f869f660717a077be5989d3dff6c80edf8291f55f" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.976369 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kv9q8" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.976539 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" (UID: "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.983689 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="cinder-scheduler" containerID="cri-o://8eee48455182d87ac46dd75dc4db1bae60c7edc2cf8c1e15b2f893b3b35951ad" gracePeriod=30 Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.983969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34eb0c70-af06-4124-a1e5-fd6010205b6d","Type":"ContainerStarted","Data":"16d0d85edc08676d9665adea643c71117e99925eb731f2c09db0329ed42d4aa7"} Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.983989 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="probe" containerID="cri-o://ce9328ea250e9d9f0906256aabef49f7b136f1f874777f4f10f88992d48558a2" gracePeriod=30 Dec 03 09:31:24 crc kubenswrapper[4856]: I1203 09:31:24.994215 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" (UID: "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.031448 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" (UID: "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.128227 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.128251 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.128263 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.150981 4856 scope.go:117] "RemoveContainer" containerID="d0335f91b1ee267925ae355a1f6d718a63693670100435c4fe9b64fbadcdae19" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.203860 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" (UID: "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.209220 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-config" (OuterVolumeSpecName: "config") pod "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" (UID: "f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.230408 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.230456 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.355009 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kv9q8"] Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.365889 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kv9q8"] Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.419350 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.435073 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-549d5987fb-kphsk" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.591174 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.634865 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-777b75cf48-68qq9" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.635029 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.642913 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-866db7fbbf-khsgj" Dec 03 09:31:25 crc kubenswrapper[4856]: I1203 09:31:25.927324 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77564f8754-gb7wv" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 09:31:26 crc kubenswrapper[4856]: I1203 09:31:26.002928 4856 generic.go:334] "Generic (PLEG): container finished" podID="9d879170-7847-458b-87aa-dfc1de37383b" containerID="ce9328ea250e9d9f0906256aabef49f7b136f1f874777f4f10f88992d48558a2" exitCode=0 Dec 03 09:31:26 crc kubenswrapper[4856]: I1203 09:31:26.003036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d879170-7847-458b-87aa-dfc1de37383b","Type":"ContainerDied","Data":"ce9328ea250e9d9f0906256aabef49f7b136f1f874777f4f10f88992d48558a2"} Dec 03 09:31:26 crc kubenswrapper[4856]: I1203 09:31:26.705174 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" path="/var/lib/kubelet/pods/f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2/volumes" Dec 03 09:31:27 crc kubenswrapper[4856]: I1203 09:31:27.053281 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"34eb0c70-af06-4124-a1e5-fd6010205b6d","Type":"ContainerStarted","Data":"99b62ec75d7b2f8bff2d74007df3e1c4f6d75922d003b08a954dad71a0fd27af"} Dec 03 09:31:27 crc kubenswrapper[4856]: I1203 09:31:27.053486 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 09:31:27 crc kubenswrapper[4856]: I1203 09:31:27.084902 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.084883134 podStartE2EDuration="6.084883134s" podCreationTimestamp="2025-12-03 09:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:27.080546112 +0000 UTC m=+1155.263438403" watchObservedRunningTime="2025-12-03 09:31:27.084883134 +0000 UTC m=+1155.267775435" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.116715 4856 generic.go:334] "Generic (PLEG): container finished" podID="9d879170-7847-458b-87aa-dfc1de37383b" containerID="8eee48455182d87ac46dd75dc4db1bae60c7edc2cf8c1e15b2f893b3b35951ad" exitCode=0 Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.116958 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d879170-7847-458b-87aa-dfc1de37383b","Type":"ContainerDied","Data":"8eee48455182d87ac46dd75dc4db1bae60c7edc2cf8c1e15b2f893b3b35951ad"} Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.377931 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.528941 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-combined-ca-bundle\") pod \"9d879170-7847-458b-87aa-dfc1de37383b\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.529132 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-scripts\") pod \"9d879170-7847-458b-87aa-dfc1de37383b\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.529281 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data-custom\") pod \"9d879170-7847-458b-87aa-dfc1de37383b\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.529359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdl7c\" (UniqueName: \"kubernetes.io/projected/9d879170-7847-458b-87aa-dfc1de37383b-kube-api-access-sdl7c\") pod \"9d879170-7847-458b-87aa-dfc1de37383b\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.530325 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d879170-7847-458b-87aa-dfc1de37383b-etc-machine-id\") pod \"9d879170-7847-458b-87aa-dfc1de37383b\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.530415 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9d879170-7847-458b-87aa-dfc1de37383b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9d879170-7847-458b-87aa-dfc1de37383b" (UID: "9d879170-7847-458b-87aa-dfc1de37383b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.530523 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data\") pod \"9d879170-7847-458b-87aa-dfc1de37383b\" (UID: \"9d879170-7847-458b-87aa-dfc1de37383b\") " Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.531266 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d879170-7847-458b-87aa-dfc1de37383b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.539194 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9d879170-7847-458b-87aa-dfc1de37383b" (UID: "9d879170-7847-458b-87aa-dfc1de37383b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.539390 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d879170-7847-458b-87aa-dfc1de37383b-kube-api-access-sdl7c" (OuterVolumeSpecName: "kube-api-access-sdl7c") pod "9d879170-7847-458b-87aa-dfc1de37383b" (UID: "9d879170-7847-458b-87aa-dfc1de37383b"). InnerVolumeSpecName "kube-api-access-sdl7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.559028 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-scripts" (OuterVolumeSpecName: "scripts") pod "9d879170-7847-458b-87aa-dfc1de37383b" (UID: "9d879170-7847-458b-87aa-dfc1de37383b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.618068 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d879170-7847-458b-87aa-dfc1de37383b" (UID: "9d879170-7847-458b-87aa-dfc1de37383b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.642711 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.642757 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.642768 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.642782 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdl7c\" (UniqueName: \"kubernetes.io/projected/9d879170-7847-458b-87aa-dfc1de37383b-kube-api-access-sdl7c\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.717583 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data" (OuterVolumeSpecName: "config-data") pod "9d879170-7847-458b-87aa-dfc1de37383b" (UID: "9d879170-7847-458b-87aa-dfc1de37383b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.718565 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:28 crc kubenswrapper[4856]: I1203 09:31:28.744427 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d879170-7847-458b-87aa-dfc1de37383b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.051316 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:29 crc kubenswrapper[4856]: E1203 09:31:29.052080 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="probe" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.052100 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="probe" Dec 03 09:31:29 crc kubenswrapper[4856]: E1203 09:31:29.052118 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerName="init" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.052125 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerName="init" Dec 03 09:31:29 crc kubenswrapper[4856]: E1203 09:31:29.052138 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="cinder-scheduler" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.052148 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="cinder-scheduler" Dec 03 09:31:29 crc kubenswrapper[4856]: E1203 09:31:29.052176 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerName="dnsmasq-dns" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 
09:31:29.052181 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerName="dnsmasq-dns" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.052356 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55b69ed-bb1f-4d5a-9b0f-dbee9d8c29a2" containerName="dnsmasq-dns" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.052369 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="cinder-scheduler" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.052384 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d879170-7847-458b-87aa-dfc1de37383b" containerName="probe" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.053079 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.055949 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.057269 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.061952 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6s77t" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.087016 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.131610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d879170-7847-458b-87aa-dfc1de37383b","Type":"ContainerDied","Data":"835a4a147223c5995563b2b6fbb89c371df0cb3ab154ff9078211cc39f9fb5a1"} Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.131675 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.131690 4856 scope.go:117] "RemoveContainer" containerID="ce9328ea250e9d9f0906256aabef49f7b136f1f874777f4f10f88992d48558a2" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.153784 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.153917 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config-secret\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.153972 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj6v9\" (UniqueName: \"kubernetes.io/projected/32b558a8-9518-4ad5-820f-ce1fcd702325-kube-api-access-fj6v9\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.154148 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-combined-ca-bundle\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.161971 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.162080 4856 scope.go:117] "RemoveContainer" containerID="8eee48455182d87ac46dd75dc4db1bae60c7edc2cf8c1e15b2f893b3b35951ad" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.177677 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.185190 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.186641 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.190821 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.204211 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.256754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config-secret\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.256877 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj6v9\" (UniqueName: \"kubernetes.io/projected/32b558a8-9518-4ad5-820f-ce1fcd702325-kube-api-access-fj6v9\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.257052 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-combined-ca-bundle\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.257176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.258166 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.262287 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config-secret\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.264615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-combined-ca-bundle\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.278407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj6v9\" (UniqueName: \"kubernetes.io/projected/32b558a8-9518-4ad5-820f-ce1fcd702325-kube-api-access-fj6v9\") pod \"openstackclient\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.359121 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.359187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.359215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.359352 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.359417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.359455 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxs5n\" (UniqueName: \"kubernetes.io/projected/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-kube-api-access-xxs5n\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.376843 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.436556 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.448871 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.465992 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.466097 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.466152 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.466276 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.466338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.466368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxs5n\" (UniqueName: \"kubernetes.io/projected/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-kube-api-access-xxs5n\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.473120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.480514 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.486590 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.490437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.493012 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.496020 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.499359 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.502315 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxs5n\" (UniqueName: \"kubernetes.io/projected/e3e3b449-8b8e-497a-bccc-c2aa4c81861d-kube-api-access-xxs5n\") pod \"cinder-scheduler-0\" (UID: \"e3e3b449-8b8e-497a-bccc-c2aa4c81861d\") " pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.508666 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.512156 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 09:31:29 crc kubenswrapper[4856]: E1203 09:31:29.598411 4856 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 03 09:31:29 crc kubenswrapper[4856]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_32b558a8-9518-4ad5-820f-ce1fcd702325_0(e53284b454a0d78a0d89d0e5648a0b98e5e11e7e404c61852cc661da34b81cc4): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e53284b454a0d78a0d89d0e5648a0b98e5e11e7e404c61852cc661da34b81cc4" Netns:"/var/run/netns/aa8d3675-9fec-4a5f-b091-fb2fdb6474ab" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=e53284b454a0d78a0d89d0e5648a0b98e5e11e7e404c61852cc661da34b81cc4;K8S_POD_UID=32b558a8-9518-4ad5-820f-ce1fcd702325" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/32b558a8-9518-4ad5-820f-ce1fcd702325]: expected pod UID "32b558a8-9518-4ad5-820f-ce1fcd702325" but got "b26ba3fd-c881-44a2-a613-17d2ee4da042" from Kube API Dec 03 09:31:29 crc kubenswrapper[4856]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 03 09:31:29 crc kubenswrapper[4856]: > Dec 03 09:31:29 crc 
kubenswrapper[4856]: E1203 09:31:29.598854 4856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 03 09:31:29 crc kubenswrapper[4856]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_32b558a8-9518-4ad5-820f-ce1fcd702325_0(e53284b454a0d78a0d89d0e5648a0b98e5e11e7e404c61852cc661da34b81cc4): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e53284b454a0d78a0d89d0e5648a0b98e5e11e7e404c61852cc661da34b81cc4" Netns:"/var/run/netns/aa8d3675-9fec-4a5f-b091-fb2fdb6474ab" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=e53284b454a0d78a0d89d0e5648a0b98e5e11e7e404c61852cc661da34b81cc4;K8S_POD_UID=32b558a8-9518-4ad5-820f-ce1fcd702325" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/32b558a8-9518-4ad5-820f-ce1fcd702325]: expected pod UID "32b558a8-9518-4ad5-820f-ce1fcd702325" but got "b26ba3fd-c881-44a2-a613-17d2ee4da042" from Kube API
Dec 03 09:31:29 crc kubenswrapper[4856]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 03 09:31:29 crc kubenswrapper[4856]: > pod="openstack/openstackclient"
Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.670152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ba3fd-c881-44a2-a613-17d2ee4da042-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient"
Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.670213 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5xj\" (UniqueName: \"kubernetes.io/projected/b26ba3fd-c881-44a2-a613-17d2ee4da042-kube-api-access-df5xj\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient"
Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.670406 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b26ba3fd-c881-44a2-a613-17d2ee4da042-openstack-config-secret\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient"
Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.670463 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b26ba3fd-c881-44a2-a613-17d2ee4da042-openstack-config\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient"
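
Note: the two "Failed to create sandbox" dumps above are a delete-and-recreate race, not a networking fault as such. openstackclient was deleted and re-added (SyncLoop DELETE/REMOVE at 09:31:29.43-.45, ADD at 09:31:29.496) while a sandbox for the old pod UID 32b558a8-... was still being set up; multus cross-checks the UID it was handed against the live pod object, found the replacement UID b26ba3fd-..., and rejected the stale CmdAdd with status 400. The kubelet then attaches volumes for the new UID (the reconciler lines just above) and tears down the old one, and the later status_manager line "Pod was deleted and then recreated, skipping status update" records the same race resolving. A hedged Go snippet for surfacing such mismatches in a large journal; the regexp is tailored to the message text above, nothing more:

```go
// Extracts the two UIDs from a multus "expected pod UID ... but got ..." failure
// so the delete/recreate race is easy to spot when grepping a big journal.
package main

import (
	"fmt"
	"regexp"
)

var uidMismatch = regexp.MustCompile(`expected pod UID "([0-9a-f-]+)" but got "([0-9a-f-]+)" from Kube API`)

func main() {
	// Trimmed sample taken verbatim from the error above.
	line := `Multus: [openstack/openstackclient/32b558a8-9518-4ad5-820f-ce1fcd702325]: expected pod UID "32b558a8-9518-4ad5-820f-ce1fcd702325" but got "b26ba3fd-c881-44a2-a613-17d2ee4da042" from Kube API`
	if m := uidMismatch.FindStringSubmatch(line); m != nil {
		fmt.Printf("sandbox was built for UID %s, but the live pod object is %s\n", m[1], m[2])
	}
}
```

Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.773153 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b26ba3fd-c881-44a2-a613-17d2ee4da042-openstack-config-secret\") pod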
\"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.773247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b26ba3fd-c881-44a2-a613-17d2ee4da042-openstack-config\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.773298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ba3fd-c881-44a2-a613-17d2ee4da042-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.773329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5xj\" (UniqueName: \"kubernetes.io/projected/b26ba3fd-c881-44a2-a613-17d2ee4da042-kube-api-access-df5xj\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.775129 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b26ba3fd-c881-44a2-a613-17d2ee4da042-openstack-config\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.783635 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b26ba3fd-c881-44a2-a613-17d2ee4da042-openstack-config-secret\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.798564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ba3fd-c881-44a2-a613-17d2ee4da042-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:29 crc kubenswrapper[4856]: I1203 09:31:29.802582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5xj\" (UniqueName: \"kubernetes.io/projected/b26ba3fd-c881-44a2-a613-17d2ee4da042-kube-api-access-df5xj\") pod \"openstackclient\" (UID: \"b26ba3fd-c881-44a2-a613-17d2ee4da042\") " pod="openstack/openstackclient" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.008938 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.131357 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.163787 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.172372 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="32b558a8-9518-4ad5-820f-ce1fcd702325" podUID="b26ba3fd-c881-44a2-a613-17d2ee4da042" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.190982 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.284562 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config-secret\") pod \"32b558a8-9518-4ad5-820f-ce1fcd702325\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.284635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-combined-ca-bundle\") pod \"32b558a8-9518-4ad5-820f-ce1fcd702325\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.284733 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj6v9\" (UniqueName: \"kubernetes.io/projected/32b558a8-9518-4ad5-820f-ce1fcd702325-kube-api-access-fj6v9\") pod \"32b558a8-9518-4ad5-820f-ce1fcd702325\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.284910 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config\") pod \"32b558a8-9518-4ad5-820f-ce1fcd702325\" (UID: \"32b558a8-9518-4ad5-820f-ce1fcd702325\") " Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.286156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "32b558a8-9518-4ad5-820f-ce1fcd702325" (UID: "32b558a8-9518-4ad5-820f-ce1fcd702325"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.294969 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "32b558a8-9518-4ad5-820f-ce1fcd702325" (UID: "32b558a8-9518-4ad5-820f-ce1fcd702325"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.295023 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32b558a8-9518-4ad5-820f-ce1fcd702325" (UID: "32b558a8-9518-4ad5-820f-ce1fcd702325"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.302469 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b558a8-9518-4ad5-820f-ce1fcd702325-kube-api-access-fj6v9" (OuterVolumeSpecName: "kube-api-access-fj6v9") pod "32b558a8-9518-4ad5-820f-ce1fcd702325" (UID: "32b558a8-9518-4ad5-820f-ce1fcd702325"). InnerVolumeSpecName "kube-api-access-fj6v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.389751 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.389833 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.389851 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b558a8-9518-4ad5-820f-ce1fcd702325-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.389863 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj6v9\" (UniqueName: \"kubernetes.io/projected/32b558a8-9518-4ad5-820f-ce1fcd702325-kube-api-access-fj6v9\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.665074 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.748201 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b558a8-9518-4ad5-820f-ce1fcd702325" path="/var/lib/kubelet/pods/32b558a8-9518-4ad5-820f-ce1fcd702325/volumes" Dec 03 09:31:30 crc kubenswrapper[4856]: I1203 09:31:30.748589 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d879170-7847-458b-87aa-dfc1de37383b" path="/var/lib/kubelet/pods/9d879170-7847-458b-87aa-dfc1de37383b/volumes" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.187578 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b26ba3fd-c881-44a2-a613-17d2ee4da042","Type":"ContainerStarted","Data":"14e530415fdac660a07666077f21eb2abf272b36863e5807b85aa11ac6b8eff8"} Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.191192 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3e3b449-8b8e-497a-bccc-c2aa4c81861d","Type":"ContainerStarted","Data":"6b8520153f48826fae8ca6b6b5ed11e544882f52969a6b48164d13780aedd3fb"} Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.195427 4856 generic.go:334] "Generic (PLEG): container finished" podID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerID="8e8d8d0e4f752b03d3faa4fb027bc5872ed5ce85678c10a5a108fa2cb35e40a3" exitCode=137 Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.195599 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.195778 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-777b75cf48-68qq9" event={"ID":"bdf3f133-b175-4a03-9518-91a5bb351c07","Type":"ContainerDied","Data":"8e8d8d0e4f752b03d3faa4fb027bc5872ed5ce85678c10a5a108fa2cb35e40a3"} Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.207360 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="32b558a8-9518-4ad5-820f-ce1fcd702325" podUID="b26ba3fd-c881-44a2-a613-17d2ee4da042" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.530450 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.653712 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-combined-ca-bundle\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.653776 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjtlh\" (UniqueName: \"kubernetes.io/projected/bdf3f133-b175-4a03-9518-91a5bb351c07-kube-api-access-hjtlh\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.653872 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-secret-key\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.653904 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-tls-certs\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.654045 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-scripts\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.654121 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-config-data\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.654220 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf3f133-b175-4a03-9518-91a5bb351c07-logs\") pod \"bdf3f133-b175-4a03-9518-91a5bb351c07\" (UID: \"bdf3f133-b175-4a03-9518-91a5bb351c07\") " Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.655296 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf3f133-b175-4a03-9518-91a5bb351c07-logs" (OuterVolumeSpecName: "logs") pod 
"bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.662007 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.669027 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf3f133-b175-4a03-9518-91a5bb351c07-kube-api-access-hjtlh" (OuterVolumeSpecName: "kube-api-access-hjtlh") pod "bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "kube-api-access-hjtlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.717780 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-config-data" (OuterVolumeSpecName: "config-data") pod "bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.717900 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.748430 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-scripts" (OuterVolumeSpecName: "scripts") pod "bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.756072 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.756102 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf3f133-b175-4a03-9518-91a5bb351c07-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.756111 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.756128 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjtlh\" (UniqueName: \"kubernetes.io/projected/bdf3f133-b175-4a03-9518-91a5bb351c07-kube-api-access-hjtlh\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.756138 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.756145 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf3f133-b175-4a03-9518-91a5bb351c07-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.775123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "bdf3f133-b175-4a03-9518-91a5bb351c07" (UID: "bdf3f133-b175-4a03-9518-91a5bb351c07"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:31 crc kubenswrapper[4856]: I1203 09:31:31.867949 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf3f133-b175-4a03-9518-91a5bb351c07-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.228075 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3e3b449-8b8e-497a-bccc-c2aa4c81861d","Type":"ContainerStarted","Data":"35b72580a1d8859706345d139deac8227ea611e227900e0ffcf6efe7fa3b84b3"} Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.248014 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-777b75cf48-68qq9" event={"ID":"bdf3f133-b175-4a03-9518-91a5bb351c07","Type":"ContainerDied","Data":"4dc1949bf505413846a5690e208c31f7984872aa4e454740225ba7b1c1d2778c"} Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.248575 4856 scope.go:117] "RemoveContainer" containerID="279dc7b9f3a417f2f052c9b05e4582beabe488060296a80e641c58f25f734338" Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.248755 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-777b75cf48-68qq9" Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.295438 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-777b75cf48-68qq9"] Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.325077 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-777b75cf48-68qq9"] Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.442275 4856 scope.go:117] "RemoveContainer" containerID="8e8d8d0e4f752b03d3faa4fb027bc5872ed5ce85678c10a5a108fa2cb35e40a3" Dec 03 09:31:32 crc kubenswrapper[4856]: I1203 09:31:32.703251 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" path="/var/lib/kubelet/pods/bdf3f133-b175-4a03-9518-91a5bb351c07/volumes" Dec 03 09:31:33 crc kubenswrapper[4856]: I1203 09:31:33.264348 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3e3b449-8b8e-497a-bccc-c2aa4c81861d","Type":"ContainerStarted","Data":"8235b944dd0fbc13de14f148f2d1e15540001b637cbe38a1670f85be42c519b8"} Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.481111 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.516653 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.521005 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.520970303 podStartE2EDuration="5.520970303s" podCreationTimestamp="2025-12-03 09:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:33.295032781 +0000 UTC m=+1161.477925082" watchObservedRunningTime="2025-12-03 09:31:34.520970303 +0000 UTC m=+1162.703862604" Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.795113 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f7df59b96-db6lp" Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.882822 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-77564f8754-gb7wv"] Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.883160 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-77564f8754-gb7wv" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api-log" containerID="cri-o://0290249b09cb99f4ed3bc6392610beaf97a267b4c619c39526d4742267bddb1a" gracePeriod=30 Dec 03 09:31:34 crc kubenswrapper[4856]: I1203 09:31:34.883359 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-77564f8754-gb7wv" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api" containerID="cri-o://bcf5134c16011651b10da761e1d3f90b81db0cbda437b67e77402c5be32bc05b" gracePeriod=30 Dec 03 09:31:35 crc kubenswrapper[4856]: I1203 09:31:35.153397 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 03 09:31:35 crc kubenswrapper[4856]: I1203 09:31:35.497393 4856 generic.go:334] "Generic (PLEG): container finished" podID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerID="0290249b09cb99f4ed3bc6392610beaf97a267b4c619c39526d4742267bddb1a" exitCode=143 Dec 03 09:31:35 crc 
kubenswrapper[4856]: I1203 09:31:35.498495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77564f8754-gb7wv" event={"ID":"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721","Type":"ContainerDied","Data":"0290249b09cb99f4ed3bc6392610beaf97a267b4c619c39526d4742267bddb1a"} Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.210911 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77564f8754-gb7wv" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:48724->10.217.0.157:9311: read: connection reset by peer" Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.210935 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77564f8754-gb7wv" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:48720->10.217.0.157:9311: read: connection reset by peer" Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.555562 4856 generic.go:334] "Generic (PLEG): container finished" podID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerID="bcf5134c16011651b10da761e1d3f90b81db0cbda437b67e77402c5be32bc05b" exitCode=0 Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.555641 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77564f8754-gb7wv" event={"ID":"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721","Type":"ContainerDied","Data":"bcf5134c16011651b10da761e1d3f90b81db0cbda437b67e77402c5be32bc05b"} Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.849944 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.958522 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-combined-ca-bundle\") pod \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.958602 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data\") pod \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.958695 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-kube-api-access-gghfp\") pod \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.958783 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data-custom\") pod \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.958973 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-logs\") pod 
\"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\" (UID: \"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721\") " Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.960980 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-logs" (OuterVolumeSpecName: "logs") pod "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" (UID: "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.984114 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" (UID: "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:38 crc kubenswrapper[4856]: I1203 09:31:38.984300 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-kube-api-access-gghfp" (OuterVolumeSpecName: "kube-api-access-gghfp") pod "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" (UID: "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721"). InnerVolumeSpecName "kube-api-access-gghfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.021919 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" (UID: "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.036726 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data" (OuterVolumeSpecName: "config-data") pod "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" (UID: "5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.063197 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.063243 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.063260 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.063276 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-kube-api-access-gghfp\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.063289 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.250933 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d5fb5d859-8njp2"] Dec 03 09:31:39 crc kubenswrapper[4856]: E1203 09:31:39.252334 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252361 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api" Dec 03 09:31:39 crc kubenswrapper[4856]: E1203 09:31:39.252390 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api-log" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252400 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api-log" Dec 03 09:31:39 crc kubenswrapper[4856]: E1203 09:31:39.252436 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252446 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" Dec 03 09:31:39 crc kubenswrapper[4856]: E1203 09:31:39.252462 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon-log" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252470 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon-log" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252737 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252773 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api-log" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 
09:31:39.252791 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" containerName="barbican-api" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.252801 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf3f133-b175-4a03-9518-91a5bb351c07" containerName="horizon-log" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.254597 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.259479 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.259540 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.259481 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.279874 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d5fb5d859-8njp2"] Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.371645 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrcz\" (UniqueName: \"kubernetes.io/projected/295f1863-c8b3-4e9a-b09f-24c393ac167c-kube-api-access-vsrcz\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372287 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-combined-ca-bundle\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372340 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-config-data\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372422 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-internal-tls-certs\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372454 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-public-tls-certs\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372484 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/295f1863-c8b3-4e9a-b09f-24c393ac167c-etc-swift\") pod 
\"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372581 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295f1863-c8b3-4e9a-b09f-24c393ac167c-log-httpd\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.372713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295f1863-c8b3-4e9a-b09f-24c393ac167c-run-httpd\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.475887 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-config-data\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.475974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-internal-tls-certs\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.476012 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-public-tls-certs\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.476042 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/295f1863-c8b3-4e9a-b09f-24c393ac167c-etc-swift\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.476085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295f1863-c8b3-4e9a-b09f-24c393ac167c-log-httpd\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.476147 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295f1863-c8b3-4e9a-b09f-24c393ac167c-run-httpd\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.476219 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsrcz\" (UniqueName: \"kubernetes.io/projected/295f1863-c8b3-4e9a-b09f-24c393ac167c-kube-api-access-vsrcz\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: 
\"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.476250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-combined-ca-bundle\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.477259 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295f1863-c8b3-4e9a-b09f-24c393ac167c-run-httpd\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.477328 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295f1863-c8b3-4e9a-b09f-24c393ac167c-log-httpd\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.486350 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-public-tls-certs\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.491697 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-internal-tls-certs\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.492087 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-config-data\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.496309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/295f1863-c8b3-4e9a-b09f-24c393ac167c-etc-swift\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.497165 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295f1863-c8b3-4e9a-b09f-24c393ac167c-combined-ca-bundle\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.503437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsrcz\" (UniqueName: \"kubernetes.io/projected/295f1863-c8b3-4e9a-b09f-24c393ac167c-kube-api-access-vsrcz\") pod \"swift-proxy-6d5fb5d859-8njp2\" (UID: \"295f1863-c8b3-4e9a-b09f-24c393ac167c\") " pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc 
kubenswrapper[4856]: I1203 09:31:39.577051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77564f8754-gb7wv" event={"ID":"5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721","Type":"ContainerDied","Data":"4f846bde358e5f6a8287bcbe548e3cb0622f47e869d3f9aa1ff568df334697c7"} Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.577138 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77564f8754-gb7wv" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.577155 4856 scope.go:117] "RemoveContainer" containerID="bcf5134c16011651b10da761e1d3f90b81db0cbda437b67e77402c5be32bc05b" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.579330 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.621841 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-77564f8754-gb7wv"] Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.637256 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-77564f8754-gb7wv"] Dec 03 09:31:39 crc kubenswrapper[4856]: I1203 09:31:39.873661 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 09:31:40 crc kubenswrapper[4856]: I1203 09:31:40.708543 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721" path="/var/lib/kubelet/pods/5dac2fd7-4b1e-4fb3-aa99-f0924f2e3721/volumes" Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.539381 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.540043 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-central-agent" containerID="cri-o://a57f8b98c6967f5209e25e844848a1aaa26e2e7f3fddc604b1cabaa765716db3" gracePeriod=30 Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.540469 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-notification-agent" containerID="cri-o://36693f9fbff707ce89e1c2f425e618ba88c228c705b50f4de9449adf5191ea68" gracePeriod=30 Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.540564 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="sg-core" containerID="cri-o://f07ea5d73c066c4c256636ba20729842231d25b3f549885cbb535d1542cedc15" gracePeriod=30 Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.540604 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="proxy-httpd" containerID="cri-o://ef57f5337410de9421231ef5671f7950dce9686824b6251d50e5279441fe8cac" gracePeriod=30 Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.550025 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 09:31:41 crc kubenswrapper[4856]: I1203 09:31:41.768437 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74fd85d868-sh6bv" Dec 03 09:31:42 crc kubenswrapper[4856]: I1203 09:31:42.644483 
4856 generic.go:334] "Generic (PLEG): container finished" podID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerID="ef57f5337410de9421231ef5671f7950dce9686824b6251d50e5279441fe8cac" exitCode=0 Dec 03 09:31:42 crc kubenswrapper[4856]: I1203 09:31:42.645171 4856 generic.go:334] "Generic (PLEG): container finished" podID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerID="f07ea5d73c066c4c256636ba20729842231d25b3f549885cbb535d1542cedc15" exitCode=2 Dec 03 09:31:42 crc kubenswrapper[4856]: I1203 09:31:42.645189 4856 generic.go:334] "Generic (PLEG): container finished" podID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerID="a57f8b98c6967f5209e25e844848a1aaa26e2e7f3fddc604b1cabaa765716db3" exitCode=0 Dec 03 09:31:42 crc kubenswrapper[4856]: I1203 09:31:42.644873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerDied","Data":"ef57f5337410de9421231ef5671f7950dce9686824b6251d50e5279441fe8cac"} Dec 03 09:31:42 crc kubenswrapper[4856]: I1203 09:31:42.645254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerDied","Data":"f07ea5d73c066c4c256636ba20729842231d25b3f549885cbb535d1542cedc15"} Dec 03 09:31:42 crc kubenswrapper[4856]: I1203 09:31:42.645267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerDied","Data":"a57f8b98c6967f5209e25e844848a1aaa26e2e7f3fddc604b1cabaa765716db3"} Dec 03 09:31:44 crc kubenswrapper[4856]: I1203 09:31:44.424043 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:31:44 crc kubenswrapper[4856]: I1203 09:31:44.424602 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-log" containerID="cri-o://f234e2541e2d33cc19aea176a313b04f5fa6e6eb6d1b6b538a8a56bad08b2dcc" gracePeriod=30 Dec 03 09:31:44 crc kubenswrapper[4856]: I1203 09:31:44.425033 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-httpd" containerID="cri-o://bc03f895c9b268026f2a3a0190885e0efd066488cf22308e6caf642a17ec7893" gracePeriod=30 Dec 03 09:31:44 crc kubenswrapper[4856]: I1203 09:31:44.677758 4856 generic.go:334] "Generic (PLEG): container finished" podID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerID="f234e2541e2d33cc19aea176a313b04f5fa6e6eb6d1b6b538a8a56bad08b2dcc" exitCode=143 Dec 03 09:31:44 crc kubenswrapper[4856]: I1203 09:31:44.677851 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3","Type":"ContainerDied","Data":"f234e2541e2d33cc19aea176a313b04f5fa6e6eb6d1b6b538a8a56bad08b2dcc"} Dec 03 09:31:44 crc kubenswrapper[4856]: I1203 09:31:44.888655 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": dial tcp 10.217.0.163:3000: connect: connection refused" Dec 03 09:31:45 crc kubenswrapper[4856]: I1203 09:31:45.697779 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerID="36693f9fbff707ce89e1c2f425e618ba88c228c705b50f4de9449adf5191ea68" exitCode=0 Dec 03 09:31:45 crc kubenswrapper[4856]: I1203 09:31:45.697853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerDied","Data":"36693f9fbff707ce89e1c2f425e618ba88c228c705b50f4de9449adf5191ea68"} Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.093428 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.093799 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-log" containerID="cri-o://f891e6da20b5529fa7af17674aa6ff33ba243d96d6d0cbc4b41365b420825fe8" gracePeriod=30 Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.093898 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-httpd" containerID="cri-o://bb969a2b5253a869cf863f56934a31d538d953f54401127ef10cf3677e4e7498" gracePeriod=30 Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.481513 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wqjjd"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.483255 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.503967 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wqjjd"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.606537 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-482j6"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.608499 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.617791 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-482j6"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.626660 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f837-account-create-update-7xhll"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.628481 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.633042 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.642514 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f837-account-create-update-7xhll"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.687413 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc2b70e-a05b-4d56-87f6-2656e84d9a77-operator-scripts\") pod \"nova-api-f837-account-create-update-7xhll\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.687517 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vn5l\" (UniqueName: \"kubernetes.io/projected/2927120f-ce3e-4ca6-8522-80b99afcdcc8-kube-api-access-2vn5l\") pod \"nova-api-db-create-wqjjd\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.687569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea33628-dc6c-486d-8214-0c17593c5c65-operator-scripts\") pod \"nova-cell0-db-create-482j6\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.687643 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2927120f-ce3e-4ca6-8522-80b99afcdcc8-operator-scripts\") pod \"nova-api-db-create-wqjjd\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.687763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8pj\" (UniqueName: \"kubernetes.io/projected/efc2b70e-a05b-4d56-87f6-2656e84d9a77-kube-api-access-lr8pj\") pod \"nova-api-f837-account-create-update-7xhll\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.717580 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-c5ktp"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.719309 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.723884 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c5ktp"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.731851 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ea94df7-893b-4dde-997a-72d9453f9ff8","Type":"ContainerDied","Data":"f891e6da20b5529fa7af17674aa6ff33ba243d96d6d0cbc4b41365b420825fe8"} Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.732161 4856 generic.go:334] "Generic (PLEG): container finished" podID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerID="f891e6da20b5529fa7af17674aa6ff33ba243d96d6d0cbc4b41365b420825fe8" exitCode=143 Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.794647 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea33628-dc6c-486d-8214-0c17593c5c65-operator-scripts\") pod \"nova-cell0-db-create-482j6\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.795029 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-72e7-account-create-update-w8vbn"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.795355 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2927120f-ce3e-4ca6-8522-80b99afcdcc8-operator-scripts\") pod \"nova-api-db-create-wqjjd\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.795615 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8pj\" (UniqueName: \"kubernetes.io/projected/efc2b70e-a05b-4d56-87f6-2656e84d9a77-kube-api-access-lr8pj\") pod \"nova-api-f837-account-create-update-7xhll\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.795750 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea33628-dc6c-486d-8214-0c17593c5c65-operator-scripts\") pod \"nova-cell0-db-create-482j6\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.795784 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfbp\" (UniqueName: \"kubernetes.io/projected/3ea33628-dc6c-486d-8214-0c17593c5c65-kube-api-access-qjfbp\") pod \"nova-cell0-db-create-482j6\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.795917 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc2b70e-a05b-4d56-87f6-2656e84d9a77-operator-scripts\") pod \"nova-api-f837-account-create-update-7xhll\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.796003 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2vn5l\" (UniqueName: \"kubernetes.io/projected/2927120f-ce3e-4ca6-8522-80b99afcdcc8-kube-api-access-2vn5l\") pod \"nova-api-db-create-wqjjd\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.796647 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2927120f-ce3e-4ca6-8522-80b99afcdcc8-operator-scripts\") pod \"nova-api-db-create-wqjjd\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.797338 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc2b70e-a05b-4d56-87f6-2656e84d9a77-operator-scripts\") pod \"nova-api-f837-account-create-update-7xhll\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.798713 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.811823 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-72e7-account-create-update-w8vbn"] Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.817594 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.843483 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vn5l\" (UniqueName: \"kubernetes.io/projected/2927120f-ce3e-4ca6-8522-80b99afcdcc8-kube-api-access-2vn5l\") pod \"nova-api-db-create-wqjjd\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.844317 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8pj\" (UniqueName: \"kubernetes.io/projected/efc2b70e-a05b-4d56-87f6-2656e84d9a77-kube-api-access-lr8pj\") pod \"nova-api-f837-account-create-update-7xhll\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.898664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfbp\" (UniqueName: \"kubernetes.io/projected/3ea33628-dc6c-486d-8214-0c17593c5c65-kube-api-access-qjfbp\") pod \"nova-cell0-db-create-482j6\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.900594 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbprd\" (UniqueName: \"kubernetes.io/projected/d1e53d88-906a-49d5-8687-bac531c74375-kube-api-access-rbprd\") pod \"nova-cell1-db-create-c5ktp\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.900778 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e53d88-906a-49d5-8687-bac531c74375-operator-scripts\") pod \"nova-cell1-db-create-c5ktp\" (UID: 
\"d1e53d88-906a-49d5-8687-bac531c74375\") " pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.965858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfbp\" (UniqueName: \"kubernetes.io/projected/3ea33628-dc6c-486d-8214-0c17593c5c65-kube-api-access-qjfbp\") pod \"nova-cell0-db-create-482j6\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.974395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:46 crc kubenswrapper[4856]: I1203 09:31:46.993276 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.006773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e53d88-906a-49d5-8687-bac531c74375-operator-scripts\") pod \"nova-cell1-db-create-c5ktp\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.006942 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x5qp\" (UniqueName: \"kubernetes.io/projected/037e7d3b-3523-4406-b7d6-39dc9c9256c3-kube-api-access-5x5qp\") pod \"nova-cell0-72e7-account-create-update-w8vbn\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.007108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037e7d3b-3523-4406-b7d6-39dc9c9256c3-operator-scripts\") pod \"nova-cell0-72e7-account-create-update-w8vbn\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.007176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbprd\" (UniqueName: \"kubernetes.io/projected/d1e53d88-906a-49d5-8687-bac531c74375-kube-api-access-rbprd\") pod \"nova-cell1-db-create-c5ktp\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.008114 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e53d88-906a-49d5-8687-bac531c74375-operator-scripts\") pod \"nova-cell1-db-create-c5ktp\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.104226 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbprd\" (UniqueName: \"kubernetes.io/projected/d1e53d88-906a-49d5-8687-bac531c74375-kube-api-access-rbprd\") pod \"nova-cell1-db-create-c5ktp\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.117404 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/037e7d3b-3523-4406-b7d6-39dc9c9256c3-operator-scripts\") pod \"nova-cell0-72e7-account-create-update-w8vbn\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.117577 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x5qp\" (UniqueName: \"kubernetes.io/projected/037e7d3b-3523-4406-b7d6-39dc9c9256c3-kube-api-access-5x5qp\") pod \"nova-cell0-72e7-account-create-update-w8vbn\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.123668 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037e7d3b-3523-4406-b7d6-39dc9c9256c3-operator-scripts\") pod \"nova-cell0-72e7-account-create-update-w8vbn\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.124192 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.176619 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-36d7-account-create-update-5hhw7"] Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.179672 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.202517 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.205762 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x5qp\" (UniqueName: \"kubernetes.io/projected/037e7d3b-3523-4406-b7d6-39dc9c9256c3-kube-api-access-5x5qp\") pod \"nova-cell0-72e7-account-create-update-w8vbn\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.218390 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt9lq\" (UniqueName: \"kubernetes.io/projected/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-kube-api-access-mt9lq\") pod \"nova-cell1-36d7-account-create-update-5hhw7\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.218493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-operator-scripts\") pod \"nova-cell1-36d7-account-create-update-5hhw7\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.229577 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-36d7-account-create-update-5hhw7"] Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.331326 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9lq\" (UniqueName: 
\"kubernetes.io/projected/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-kube-api-access-mt9lq\") pod \"nova-cell1-36d7-account-create-update-5hhw7\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.331757 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-operator-scripts\") pod \"nova-cell1-36d7-account-create-update-5hhw7\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.332553 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-operator-scripts\") pod \"nova-cell1-36d7-account-create-update-5hhw7\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.353951 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9lq\" (UniqueName: \"kubernetes.io/projected/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-kube-api-access-mt9lq\") pod \"nova-cell1-36d7-account-create-update-5hhw7\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.354372 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.359400 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.551279 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.748914 4856 generic.go:334] "Generic (PLEG): container finished" podID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerID="bc03f895c9b268026f2a3a0190885e0efd066488cf22308e6caf642a17ec7893" exitCode=0 Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.750132 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3","Type":"ContainerDied","Data":"bc03f895c9b268026f2a3a0190885e0efd066488cf22308e6caf642a17ec7893"} Dec 03 09:31:47 crc kubenswrapper[4856]: I1203 09:31:47.886231 4856 scope.go:117] "RemoveContainer" containerID="0290249b09cb99f4ed3bc6392610beaf97a267b4c619c39526d4742267bddb1a" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.466389 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573247 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-combined-ca-bundle\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573358 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-run-httpd\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-log-httpd\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-scripts\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573763 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-config-data\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573843 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbf4v\" (UniqueName: \"kubernetes.io/projected/4292f186-8667-43a4-90bb-1f5202e3d7c7-kube-api-access-zbf4v\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.573997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-sg-core-conf-yaml\") pod \"4292f186-8667-43a4-90bb-1f5202e3d7c7\" (UID: \"4292f186-8667-43a4-90bb-1f5202e3d7c7\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.574624 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.574952 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.574975 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.635828 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4292f186-8667-43a4-90bb-1f5202e3d7c7-kube-api-access-zbf4v" (OuterVolumeSpecName: "kube-api-access-zbf4v") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). InnerVolumeSpecName "kube-api-access-zbf4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.636506 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-scripts" (OuterVolumeSpecName: "scripts") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.677944 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4292f186-8667-43a4-90bb-1f5202e3d7c7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.677968 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.677977 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbf4v\" (UniqueName: \"kubernetes.io/projected/4292f186-8667-43a4-90bb-1f5202e3d7c7-kube-api-access-zbf4v\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.710151 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.778816 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-config-data" (OuterVolumeSpecName: "config-data") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.780366 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.781132 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.781161 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.807541 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.808982 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4292f186-8667-43a4-90bb-1f5202e3d7c7" (UID: "4292f186-8667-43a4-90bb-1f5202e3d7c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.854346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3","Type":"ContainerDied","Data":"fa11ac341f052b9f97207e1a9e81d45f6e16407d4519ff67e7cdbaf6bc5a9a0e"} Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.854476 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b26ba3fd-c881-44a2-a613-17d2ee4da042","Type":"ContainerStarted","Data":"428a0172a21eaea9842255618bfb6322c2b4e46a90ab8a09ee3e74ce54fa4041"} Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.854500 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4292f186-8667-43a4-90bb-1f5202e3d7c7","Type":"ContainerDied","Data":"5d1fa9813713b0237856a148ac8744f3afee6fda4e545472f3f4e9ec9309ccda"} Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.854558 4856 scope.go:117] "RemoveContainer" containerID="bc03f895c9b268026f2a3a0190885e0efd066488cf22308e6caf642a17ec7893" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.884375 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-logs" (OuterVolumeSpecName: "logs") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.884518 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-logs\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.884696 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-internal-tls-certs\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.898378 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7844d7bfd9-p972t" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.899172 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.629672634 podStartE2EDuration="19.899138998s" podCreationTimestamp="2025-12-03 09:31:29 +0000 UTC" firstStartedPulling="2025-12-03 09:31:30.718843589 +0000 UTC m=+1158.901735920" lastFinishedPulling="2025-12-03 09:31:47.988309973 +0000 UTC m=+1176.171202284" observedRunningTime="2025-12-03 09:31:48.875359599 +0000 UTC m=+1177.058251890" watchObservedRunningTime="2025-12-03 09:31:48.899138998 +0000 UTC m=+1177.082031299" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.915841 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.920468 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.920592 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-scripts\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.920757 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-combined-ca-bundle\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.920857 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-httpd-run\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.920899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-config-data\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.920928 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xmbx\" (UniqueName: \"kubernetes.io/projected/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-kube-api-access-8xmbx\") pod \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\" (UID: \"1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3\") " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.921942 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.922487 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4292f186-8667-43a4-90bb-1f5202e3d7c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.922932 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.922963 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 03 09:31:48 crc kubenswrapper[4856]: I1203 09:31:48.922982 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.002984 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-kube-api-access-8xmbx" (OuterVolumeSpecName: "kube-api-access-8xmbx") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "kube-api-access-8xmbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.012601 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-scripts" (OuterVolumeSpecName: "scripts") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.028633 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.028683 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xmbx\" (UniqueName: \"kubernetes.io/projected/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-kube-api-access-8xmbx\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.067495 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.077987 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.105790 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74fd85d868-sh6bv"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.106090 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74fd85d868-sh6bv" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-api" containerID="cri-o://e112b0ed988513a47362d5c192e3a59fda82b7af151a23e36864a8eb36d0c0fd" gracePeriod=30 Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.106656 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74fd85d868-sh6bv" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-httpd" containerID="cri-o://89de9afc15304197d4096c2102f77a838bc3f096ae738c480b0548a5b97fd708" gracePeriod=30 Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.135475 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.136118 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.144028 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-config-data" (OuterVolumeSpecName: "config-data") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.156866 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" (UID: "1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.191403 4856 scope.go:117] "RemoveContainer" containerID="f234e2541e2d33cc19aea176a313b04f5fa6e6eb6d1b6b538a8a56bad08b2dcc" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.234410 4856 scope.go:117] "RemoveContainer" containerID="ef57f5337410de9421231ef5671f7950dce9686824b6251d50e5279441fe8cac" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.235642 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.238229 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.238255 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.246284 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.258610 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:49 crc kubenswrapper[4856]: E1203 09:31:49.259234 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-central-agent" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259260 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-central-agent" Dec 03 09:31:49 crc kubenswrapper[4856]: E1203 09:31:49.259290 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="proxy-httpd" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259299 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="proxy-httpd" Dec 03 09:31:49 crc kubenswrapper[4856]: E1203 09:31:49.259320 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-httpd" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259327 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-httpd" Dec 03 09:31:49 crc kubenswrapper[4856]: E1203 09:31:49.259345 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-notification-agent" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259352 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-notification-agent" Dec 03 09:31:49 crc kubenswrapper[4856]: E1203 09:31:49.259363 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="sg-core" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259369 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="sg-core" Dec 03 09:31:49 crc kubenswrapper[4856]: E1203 09:31:49.259393 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-log" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259399 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-log" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259637 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="sg-core" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259654 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-log" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259679 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-httpd" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259690 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="proxy-httpd" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259705 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-notification-agent" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.259714 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" containerName="ceilometer-central-agent" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.262036 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.265706 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.269613 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.273817 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.281184 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:42002->10.217.0.150:9292: read: connection reset by peer" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.281793 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:42016->10.217.0.150:9292: read: connection reset by peer" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.300228 4856 scope.go:117] "RemoveContainer" containerID="f07ea5d73c066c4c256636ba20729842231d25b3f549885cbb535d1542cedc15" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.341854 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-run-httpd\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.341986 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.342081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.342134 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-scripts\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.342302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7752z\" (UniqueName: \"kubernetes.io/projected/89f3c997-bcf8-4fa8-b685-717479d7096e-kube-api-access-7752z\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.342493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-log-httpd\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.342525 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-config-data\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.366751 4856 scope.go:117] "RemoveContainer" containerID="36693f9fbff707ce89e1c2f425e618ba88c228c705b50f4de9449adf5191ea68" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.444793 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-scripts\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.445432 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7752z\" (UniqueName: \"kubernetes.io/projected/89f3c997-bcf8-4fa8-b685-717479d7096e-kube-api-access-7752z\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.445476 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-log-httpd\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.445504 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-config-data\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.445543 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-run-httpd\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.445589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.445676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.448615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-run-httpd\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.449583 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-log-httpd\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.453211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-scripts\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.453355 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.455374 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-config-data\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.455417 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.461693 4856 scope.go:117] "RemoveContainer" containerID="a57f8b98c6967f5209e25e844848a1aaa26e2e7f3fddc604b1cabaa765716db3" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 
09:31:49.475758 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7752z\" (UniqueName: \"kubernetes.io/projected/89f3c997-bcf8-4fa8-b685-717479d7096e-kube-api-access-7752z\") pod \"ceilometer-0\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") " pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.594429 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.596487 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wqjjd"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.645326 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f837-account-create-update-7xhll"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.703323 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-72e7-account-create-update-w8vbn"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.722633 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-482j6"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.733051 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c5ktp"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.787134 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d5fb5d859-8njp2"] Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.816400 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-36d7-account-create-update-5hhw7"] Dec 03 09:31:49 crc kubenswrapper[4856]: W1203 09:31:49.927972 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod295f1863_c8b3_4e9a_b09f_24c393ac167c.slice/crio-016da843926d15d2700e5fb297ce36315c8d584a772771daa8a3919e2968471f WatchSource:0}: Error finding container 016da843926d15d2700e5fb297ce36315c8d584a772771daa8a3919e2968471f: Status 404 returned error can't find the container with id 016da843926d15d2700e5fb297ce36315c8d584a772771daa8a3919e2968471f Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.928117 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqjjd" event={"ID":"2927120f-ce3e-4ca6-8522-80b99afcdcc8","Type":"ContainerStarted","Data":"1a88528a45811235afb865447514a1b8229b9518ee16c55c33f4f8289cfc5231"} Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.936329 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-482j6" event={"ID":"3ea33628-dc6c-486d-8214-0c17593c5c65","Type":"ContainerStarted","Data":"a8f5640a54d0bd03136777511cbeccd00ec89c8d3e437feff5edcf4ac46f8745"} Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.958767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5ktp" event={"ID":"d1e53d88-906a-49d5-8687-bac531c74375","Type":"ContainerStarted","Data":"e3189b917603199d085a4df1d3d65cd828cd2bd7e6994679c8416e5554c51fd9"} Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.977456 4856 generic.go:334] "Generic (PLEG): container finished" podID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerID="bb969a2b5253a869cf863f56934a31d538d953f54401127ef10cf3677e4e7498" exitCode=0 Dec 03 09:31:49 crc kubenswrapper[4856]: I1203 09:31:49.977525 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"3ea94df7-893b-4dde-997a-72d9453f9ff8","Type":"ContainerDied","Data":"bb969a2b5253a869cf863f56934a31d538d953f54401127ef10cf3677e4e7498"} Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.000412 4856 generic.go:334] "Generic (PLEG): container finished" podID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerID="89de9afc15304197d4096c2102f77a838bc3f096ae738c480b0548a5b97fd708" exitCode=0 Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.000515 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fd85d868-sh6bv" event={"ID":"e119392f-94d6-436b-ac48-e548d91a8f0a","Type":"ContainerDied","Data":"89de9afc15304197d4096c2102f77a838bc3f096ae738c480b0548a5b97fd708"} Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.004480 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f837-account-create-update-7xhll" event={"ID":"efc2b70e-a05b-4d56-87f6-2656e84d9a77","Type":"ContainerStarted","Data":"a63bf3a5dfb4e6a7c5bca069d247704dacd1d70fd3935ec626025ba3b61e0965"} Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.018512 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" event={"ID":"037e7d3b-3523-4406-b7d6-39dc9c9256c3","Type":"ContainerStarted","Data":"1197dd413f88ac18457e94f0f481859c62eef0ab7e3bdf1c4e3d0144fbb38113"} Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.043881 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.181241 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.221174 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.270821 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.272640 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.275071 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.275991 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.290893 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.295952 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.369346 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.389267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-logs\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.389546 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7q7\" (UniqueName: \"kubernetes.io/projected/3ea94df7-893b-4dde-997a-72d9453f9ff8-kube-api-access-6r7q7\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.389730 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.389868 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-combined-ca-bundle\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.389990 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-public-tls-certs\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.390094 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-httpd-run\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.390265 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-scripts\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.390353 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-config-data\") pod \"3ea94df7-893b-4dde-997a-72d9453f9ff8\" (UID: \"3ea94df7-893b-4dde-997a-72d9453f9ff8\") " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.390657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.390862 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.390943 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.391034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.391153 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.391380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.391480 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.391574 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvjb\" (UniqueName: \"kubernetes.io/projected/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-kube-api-access-vbvjb\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.395478 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-logs" (OuterVolumeSpecName: "logs") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.407025 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.493971 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494165 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494199 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvjb\" (UniqueName: \"kubernetes.io/projected/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-kube-api-access-vbvjb\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494221 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494293 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494341 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494407 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.494419 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3ea94df7-893b-4dde-997a-72d9453f9ff8-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.495503 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.500515 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.502988 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.713588 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" path="/var/lib/kubelet/pods/1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3/volumes" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.718983 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4292f186-8667-43a4-90bb-1f5202e3d7c7" path="/var/lib/kubelet/pods/4292f186-8667-43a4-90bb-1f5202e3d7c7/volumes" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.748615 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-scripts" (OuterVolumeSpecName: "scripts") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.749342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea94df7-893b-4dde-997a-72d9453f9ff8-kube-api-access-6r7q7" (OuterVolumeSpecName: "kube-api-access-6r7q7") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "kube-api-access-6r7q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.751457 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.751502 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.755197 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.755916 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.756554 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvjb\" (UniqueName: \"kubernetes.io/projected/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-kube-api-access-vbvjb\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.758319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5006ab2-d2cb-45a1-b5b4-496b36d94bf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.811781 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.811845 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r7q7\" (UniqueName: \"kubernetes.io/projected/3ea94df7-893b-4dde-997a-72d9453f9ff8-kube-api-access-6r7q7\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.811880 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 03 09:31:50 crc kubenswrapper[4856]: I1203 09:31:50.932080 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 09:31:51 crc 
kubenswrapper[4856]: I1203 09:31:51.057251 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.064025 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" event={"ID":"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1","Type":"ContainerStarted","Data":"10273cd0213c46785a8f01644cec36674da8a9554f7105ae8a4706492aabc37d"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.064076 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" event={"ID":"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1","Type":"ContainerStarted","Data":"cb148beea5e51464e8a1dd53630544adb1025b9f6bbaf97892261ba5aabedc01"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.073097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-482j6" event={"ID":"3ea33628-dc6c-486d-8214-0c17593c5c65","Type":"ContainerStarted","Data":"81827d51a71e64f28a01cd822e516ff1c4de082bf2b5cf3bfafa187a7d4e928f"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.076491 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d5fb5d859-8njp2" event={"ID":"295f1863-c8b3-4e9a-b09f-24c393ac167c","Type":"ContainerStarted","Data":"016da843926d15d2700e5fb297ce36315c8d584a772771daa8a3919e2968471f"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.085938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqjjd" event={"ID":"2927120f-ce3e-4ca6-8522-80b99afcdcc8","Type":"ContainerStarted","Data":"0f4f35dc969e0914f86618d2e261c8e8be38de15f262d5f17ec2750caba29816"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.088245 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" podStartSLOduration=5.088220558 podStartE2EDuration="5.088220558s" podCreationTimestamp="2025-12-03 09:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:51.082111154 +0000 UTC m=+1179.265003455" watchObservedRunningTime="2025-12-03 09:31:51.088220558 +0000 UTC m=+1179.271112859" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.090905 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerStarted","Data":"0d9f2972ffd05e8fdebbe526ac453b3fd452867daedeee468288daa369220ccb"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.092707 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5ktp" event={"ID":"d1e53d88-906a-49d5-8687-bac531c74375","Type":"ContainerStarted","Data":"355d73a2dbf662a5f84176f02db1167cb13552968636f5b11f4318dc36b96639"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.102098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ea94df7-893b-4dde-997a-72d9453f9ff8","Type":"ContainerDied","Data":"4e9f288999a0d606f8b85bd4fc461dab6e798eb2e8919a65f01ca9087d96aba4"} Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.102197 4856 scope.go:117] "RemoveContainer" containerID="bb969a2b5253a869cf863f56934a31d538d953f54401127ef10cf3677e4e7498" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.102380 4856 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.125235 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-482j6" podStartSLOduration=5.125203689 podStartE2EDuration="5.125203689s" podCreationTimestamp="2025-12-03 09:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:51.099128402 +0000 UTC m=+1179.282020713" watchObservedRunningTime="2025-12-03 09:31:51.125203689 +0000 UTC m=+1179.308096000" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.176935 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.185546 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-wqjjd" podStartSLOduration=5.185508897 podStartE2EDuration="5.185508897s" podCreationTimestamp="2025-12-03 09:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:51.125112716 +0000 UTC m=+1179.308005017" watchObservedRunningTime="2025-12-03 09:31:51.185508897 +0000 UTC m=+1179.368401198" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.219794 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-c5ktp" podStartSLOduration=5.21976392 podStartE2EDuration="5.21976392s" podCreationTimestamp="2025-12-03 09:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:51.14868797 +0000 UTC m=+1179.331580271" watchObservedRunningTime="2025-12-03 09:31:51.21976392 +0000 UTC m=+1179.402656211" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.230293 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.311711 4856 scope.go:117] "RemoveContainer" containerID="f891e6da20b5529fa7af17674aa6ff33ba243d96d6d0cbc4b41365b420825fe8" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.401130 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.411331 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-config-data" (OuterVolumeSpecName: "config-data") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.439650 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ea94df7-893b-4dde-997a-72d9453f9ff8" (UID: "3ea94df7-893b-4dde-997a-72d9453f9ff8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.445790 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.445848 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.445869 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea94df7-893b-4dde-997a-72d9453f9ff8-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.779688 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.790237 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.813406 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:31:51 crc kubenswrapper[4856]: E1203 09:31:51.815024 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-httpd" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.815105 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-httpd" Dec 03 09:31:51 crc kubenswrapper[4856]: E1203 09:31:51.815195 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-log" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.815257 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-log" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.815479 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-httpd" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.815579 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" containerName="glance-log" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.816607 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.819721 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.821306 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.839256 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958124 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958680 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958707 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-logs\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958840 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:51 crc kubenswrapper[4856]: I1203 09:31:51.958878 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p7gsw\" (UniqueName: \"kubernetes.io/projected/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-kube-api-access-p7gsw\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.021297 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061098 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061136 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gsw\" (UniqueName: \"kubernetes.io/projected/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-kube-api-access-p7gsw\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061229 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061257 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061295 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.061318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-logs\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 
09:31:52.061940 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-logs\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.062503 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.062848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.070141 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.074390 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.074684 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.076882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.104501 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gsw\" (UniqueName: \"kubernetes.io/projected/2ebc0dc7-337a-46c5-ae8e-98ca475977a0-kube-api-access-p7gsw\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.141492 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2ebc0dc7-337a-46c5-ae8e-98ca475977a0\") " pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.145615 4856 generic.go:334] "Generic (PLEG): container finished" podID="efc2b70e-a05b-4d56-87f6-2656e84d9a77" 
containerID="1671f5d17bf50cebd191032c03b62666e5e3f75392ac471607581f4adc3ac712" exitCode=0 Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.145866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f837-account-create-update-7xhll" event={"ID":"efc2b70e-a05b-4d56-87f6-2656e84d9a77","Type":"ContainerDied","Data":"1671f5d17bf50cebd191032c03b62666e5e3f75392ac471607581f4adc3ac712"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.151144 4856 generic.go:334] "Generic (PLEG): container finished" podID="c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" containerID="10273cd0213c46785a8f01644cec36674da8a9554f7105ae8a4706492aabc37d" exitCode=0 Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.151247 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" event={"ID":"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1","Type":"ContainerDied","Data":"10273cd0213c46785a8f01644cec36674da8a9554f7105ae8a4706492aabc37d"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.153469 4856 generic.go:334] "Generic (PLEG): container finished" podID="037e7d3b-3523-4406-b7d6-39dc9c9256c3" containerID="ec0cfd0dc6ab12c5648c443c38f05664f0f05ea234955cb92bc17bc9ac43a4c5" exitCode=0 Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.153529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" event={"ID":"037e7d3b-3523-4406-b7d6-39dc9c9256c3","Type":"ContainerDied","Data":"ec0cfd0dc6ab12c5648c443c38f05664f0f05ea234955cb92bc17bc9ac43a4c5"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.154824 4856 generic.go:334] "Generic (PLEG): container finished" podID="3ea33628-dc6c-486d-8214-0c17593c5c65" containerID="81827d51a71e64f28a01cd822e516ff1c4de082bf2b5cf3bfafa187a7d4e928f" exitCode=0 Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.154878 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-482j6" event={"ID":"3ea33628-dc6c-486d-8214-0c17593c5c65","Type":"ContainerDied","Data":"81827d51a71e64f28a01cd822e516ff1c4de082bf2b5cf3bfafa187a7d4e928f"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.157202 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d5fb5d859-8njp2" event={"ID":"295f1863-c8b3-4e9a-b09f-24c393ac167c","Type":"ContainerStarted","Data":"196defd7fbdb67972ce443e1df01c7a8736628728d834f2ec2393e95041d800f"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.157244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d5fb5d859-8njp2" event={"ID":"295f1863-c8b3-4e9a-b09f-24c393ac167c","Type":"ContainerStarted","Data":"ec7e61682a68ab11ae57b84f38081fbcb1aa39c1f9dff1a28a3b8bef73d5f70f"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.157345 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.157460 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.158542 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.177233 4856 generic.go:334] "Generic (PLEG): container finished" podID="d1e53d88-906a-49d5-8687-bac531c74375" containerID="355d73a2dbf662a5f84176f02db1167cb13552968636f5b11f4318dc36b96639" exitCode=0 Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.177345 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5ktp" event={"ID":"d1e53d88-906a-49d5-8687-bac531c74375","Type":"ContainerDied","Data":"355d73a2dbf662a5f84176f02db1167cb13552968636f5b11f4318dc36b96639"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.188251 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2","Type":"ContainerStarted","Data":"ce75ae5c1f18a9b6cea2656f24d0be2676410b5d9eeea8ce91ddcaa795ad08ed"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.189862 4856 generic.go:334] "Generic (PLEG): container finished" podID="2927120f-ce3e-4ca6-8522-80b99afcdcc8" containerID="0f4f35dc969e0914f86618d2e261c8e8be38de15f262d5f17ec2750caba29816" exitCode=0 Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.189940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqjjd" event={"ID":"2927120f-ce3e-4ca6-8522-80b99afcdcc8","Type":"ContainerDied","Data":"0f4f35dc969e0914f86618d2e261c8e8be38de15f262d5f17ec2750caba29816"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.210152 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerStarted","Data":"3d633b2f73e9f156393f67d4494de8d0d8868a89dca013ec682b208ea860c7f6"} Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.240092 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d5fb5d859-8njp2" podStartSLOduration=13.24005565 podStartE2EDuration="13.24005565s" podCreationTimestamp="2025-12-03 09:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:52.202410283 +0000 UTC m=+1180.385302584" watchObservedRunningTime="2025-12-03 09:31:52.24005565 +0000 UTC m=+1180.422947951" Dec 03 09:31:52 crc kubenswrapper[4856]: I1203 09:31:52.716220 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea94df7-893b-4dde-997a-72d9453f9ff8" path="/var/lib/kubelet/pods/3ea94df7-893b-4dde-997a-72d9453f9ff8/volumes" Dec 03 09:31:53 crc kubenswrapper[4856]: I1203 09:31:53.132066 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 09:31:53 crc kubenswrapper[4856]: I1203 09:31:53.246634 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerStarted","Data":"cf0256360e1a28097ccabbf9d1c9b3682eadd8d9b3d953e2fce270b596170095"} Dec 03 09:31:53 crc kubenswrapper[4856]: I1203 09:31:53.263846 4856 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2","Type":"ContainerStarted","Data":"e6cdfa56415223ff872bc1a56898ff1c8ccd872cf9ad624e69f3197ad4f7d8ff"} Dec 03 09:31:53 crc kubenswrapper[4856]: I1203 09:31:53.273375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ebc0dc7-337a-46c5-ae8e-98ca475977a0","Type":"ContainerStarted","Data":"a54936fa06ec408fd2dab14df438433754a7c4eca40249d4f6668698da14dc89"} Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.345267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" event={"ID":"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1","Type":"ContainerDied","Data":"cb148beea5e51464e8a1dd53630544adb1025b9f6bbaf97892261ba5aabedc01"} Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.346033 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb148beea5e51464e8a1dd53630544adb1025b9f6bbaf97892261ba5aabedc01" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.352984 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.388059 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-operator-scripts\") pod \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.388324 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt9lq\" (UniqueName: \"kubernetes.io/projected/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-kube-api-access-mt9lq\") pod \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\" (UID: \"c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.389570 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" (UID: "c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.420634 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-kube-api-access-mt9lq" (OuterVolumeSpecName: "kube-api-access-mt9lq") pod "c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" (UID: "c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1"). InnerVolumeSpecName "kube-api-access-mt9lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.508384 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.508789 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt9lq\" (UniqueName: \"kubernetes.io/projected/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1-kube-api-access-mt9lq\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.611186 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.694326 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.715670 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8pj\" (UniqueName: \"kubernetes.io/projected/efc2b70e-a05b-4d56-87f6-2656e84d9a77-kube-api-access-lr8pj\") pod \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.715851 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc2b70e-a05b-4d56-87f6-2656e84d9a77-operator-scripts\") pod \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\" (UID: \"efc2b70e-a05b-4d56-87f6-2656e84d9a77\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.720649 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efc2b70e-a05b-4d56-87f6-2656e84d9a77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efc2b70e-a05b-4d56-87f6-2656e84d9a77" (UID: "efc2b70e-a05b-4d56-87f6-2656e84d9a77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.728096 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc2b70e-a05b-4d56-87f6-2656e84d9a77-kube-api-access-lr8pj" (OuterVolumeSpecName: "kube-api-access-lr8pj") pod "efc2b70e-a05b-4d56-87f6-2656e84d9a77" (UID: "efc2b70e-a05b-4d56-87f6-2656e84d9a77"). InnerVolumeSpecName "kube-api-access-lr8pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.743870 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.796888 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.823155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbprd\" (UniqueName: \"kubernetes.io/projected/d1e53d88-906a-49d5-8687-bac531c74375-kube-api-access-rbprd\") pod \"d1e53d88-906a-49d5-8687-bac531c74375\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.823422 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e53d88-906a-49d5-8687-bac531c74375-operator-scripts\") pod \"d1e53d88-906a-49d5-8687-bac531c74375\" (UID: \"d1e53d88-906a-49d5-8687-bac531c74375\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.824165 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8pj\" (UniqueName: \"kubernetes.io/projected/efc2b70e-a05b-4d56-87f6-2656e84d9a77-kube-api-access-lr8pj\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.824185 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc2b70e-a05b-4d56-87f6-2656e84d9a77-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.832533 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e53d88-906a-49d5-8687-bac531c74375-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1e53d88-906a-49d5-8687-bac531c74375" (UID: "d1e53d88-906a-49d5-8687-bac531c74375"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.845420 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e53d88-906a-49d5-8687-bac531c74375-kube-api-access-rbprd" (OuterVolumeSpecName: "kube-api-access-rbprd") pod "d1e53d88-906a-49d5-8687-bac531c74375" (UID: "d1e53d88-906a-49d5-8687-bac531c74375"). InnerVolumeSpecName "kube-api-access-rbprd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.867864 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.925585 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjfbp\" (UniqueName: \"kubernetes.io/projected/3ea33628-dc6c-486d-8214-0c17593c5c65-kube-api-access-qjfbp\") pod \"3ea33628-dc6c-486d-8214-0c17593c5c65\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.925772 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x5qp\" (UniqueName: \"kubernetes.io/projected/037e7d3b-3523-4406-b7d6-39dc9c9256c3-kube-api-access-5x5qp\") pod \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.925881 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037e7d3b-3523-4406-b7d6-39dc9c9256c3-operator-scripts\") pod \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\" (UID: \"037e7d3b-3523-4406-b7d6-39dc9c9256c3\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.925921 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea33628-dc6c-486d-8214-0c17593c5c65-operator-scripts\") pod \"3ea33628-dc6c-486d-8214-0c17593c5c65\" (UID: \"3ea33628-dc6c-486d-8214-0c17593c5c65\") " Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.926649 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbprd\" (UniqueName: \"kubernetes.io/projected/d1e53d88-906a-49d5-8687-bac531c74375-kube-api-access-rbprd\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.926674 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e53d88-906a-49d5-8687-bac531c74375-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.926731 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037e7d3b-3523-4406-b7d6-39dc9c9256c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "037e7d3b-3523-4406-b7d6-39dc9c9256c3" (UID: "037e7d3b-3523-4406-b7d6-39dc9c9256c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.927305 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea33628-dc6c-486d-8214-0c17593c5c65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ea33628-dc6c-486d-8214-0c17593c5c65" (UID: "3ea33628-dc6c-486d-8214-0c17593c5c65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.943192 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea33628-dc6c-486d-8214-0c17593c5c65-kube-api-access-qjfbp" (OuterVolumeSpecName: "kube-api-access-qjfbp") pod "3ea33628-dc6c-486d-8214-0c17593c5c65" (UID: "3ea33628-dc6c-486d-8214-0c17593c5c65"). InnerVolumeSpecName "kube-api-access-qjfbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:54 crc kubenswrapper[4856]: I1203 09:31:54.965842 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037e7d3b-3523-4406-b7d6-39dc9c9256c3-kube-api-access-5x5qp" (OuterVolumeSpecName: "kube-api-access-5x5qp") pod "037e7d3b-3523-4406-b7d6-39dc9c9256c3" (UID: "037e7d3b-3523-4406-b7d6-39dc9c9256c3"). InnerVolumeSpecName "kube-api-access-5x5qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.027377 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2927120f-ce3e-4ca6-8522-80b99afcdcc8-operator-scripts\") pod \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.027440 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vn5l\" (UniqueName: \"kubernetes.io/projected/2927120f-ce3e-4ca6-8522-80b99afcdcc8-kube-api-access-2vn5l\") pod \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\" (UID: \"2927120f-ce3e-4ca6-8522-80b99afcdcc8\") " Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.028190 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjfbp\" (UniqueName: \"kubernetes.io/projected/3ea33628-dc6c-486d-8214-0c17593c5c65-kube-api-access-qjfbp\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.028209 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x5qp\" (UniqueName: \"kubernetes.io/projected/037e7d3b-3523-4406-b7d6-39dc9c9256c3-kube-api-access-5x5qp\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.028219 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037e7d3b-3523-4406-b7d6-39dc9c9256c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.028228 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ea33628-dc6c-486d-8214-0c17593c5c65-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.029876 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2927120f-ce3e-4ca6-8522-80b99afcdcc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2927120f-ce3e-4ca6-8522-80b99afcdcc8" (UID: "2927120f-ce3e-4ca6-8522-80b99afcdcc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.040116 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2927120f-ce3e-4ca6-8522-80b99afcdcc8-kube-api-access-2vn5l" (OuterVolumeSpecName: "kube-api-access-2vn5l") pod "2927120f-ce3e-4ca6-8522-80b99afcdcc8" (UID: "2927120f-ce3e-4ca6-8522-80b99afcdcc8"). InnerVolumeSpecName "kube-api-access-2vn5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.140700 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2927120f-ce3e-4ca6-8522-80b99afcdcc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.140782 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vn5l\" (UniqueName: \"kubernetes.io/projected/2927120f-ce3e-4ca6-8522-80b99afcdcc8-kube-api-access-2vn5l\") on node \"crc\" DevicePath \"\"" Dec 03 09:31:55 crc kubenswrapper[4856]: E1203 09:31:55.162063 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc254bdd1_0dcf_46df_9f1d_d3d0e54a8fb1.slice/crio-cb148beea5e51464e8a1dd53630544adb1025b9f6bbaf97892261ba5aabedc01\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc254bdd1_0dcf_46df_9f1d_d3d0e54a8fb1.slice\": RecentStats: unable to find data in memory cache]" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.364827 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b5006ab2-d2cb-45a1-b5b4-496b36d94bf2","Type":"ContainerStarted","Data":"7675a1b79fc9f798b1fd4210b92fb2a4010c5e9dc401b8368ab1ed60bf91bdee"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.369388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-482j6" event={"ID":"3ea33628-dc6c-486d-8214-0c17593c5c65","Type":"ContainerDied","Data":"a8f5640a54d0bd03136777511cbeccd00ec89c8d3e437feff5edcf4ac46f8745"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.369440 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f5640a54d0bd03136777511cbeccd00ec89c8d3e437feff5edcf4ac46f8745" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.369548 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-482j6" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.387508 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wqjjd" event={"ID":"2927120f-ce3e-4ca6-8522-80b99afcdcc8","Type":"ContainerDied","Data":"1a88528a45811235afb865447514a1b8229b9518ee16c55c33f4f8289cfc5231"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.387563 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a88528a45811235afb865447514a1b8229b9518ee16c55c33f4f8289cfc5231" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.387660 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wqjjd" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.391577 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.391549594 podStartE2EDuration="5.391549594s" podCreationTimestamp="2025-12-03 09:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:55.39060901 +0000 UTC m=+1183.573501311" watchObservedRunningTime="2025-12-03 09:31:55.391549594 +0000 UTC m=+1183.574441895" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.394548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ebc0dc7-337a-46c5-ae8e-98ca475977a0","Type":"ContainerStarted","Data":"ab2628ef5656b83a66709448f338b96f41b520e06471aa90896da939bba23227"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.399868 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5ktp" event={"ID":"d1e53d88-906a-49d5-8687-bac531c74375","Type":"ContainerDied","Data":"e3189b917603199d085a4df1d3d65cd828cd2bd7e6994679c8416e5554c51fd9"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.399943 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3189b917603199d085a4df1d3d65cd828cd2bd7e6994679c8416e5554c51fd9" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.400101 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c5ktp" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.426358 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerStarted","Data":"5427c79422b1806aa27548c294b958b31ac35c4f177463f732e0a4cf62cfe6c3"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.435832 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f837-account-create-update-7xhll" event={"ID":"efc2b70e-a05b-4d56-87f6-2656e84d9a77","Type":"ContainerDied","Data":"a63bf3a5dfb4e6a7c5bca069d247704dacd1d70fd3935ec626025ba3b61e0965"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.435884 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63bf3a5dfb4e6a7c5bca069d247704dacd1d70fd3935ec626025ba3b61e0965" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.435882 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f837-account-create-update-7xhll" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.446562 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-36d7-account-create-update-5hhw7" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.446654 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.447759 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72e7-account-create-update-w8vbn" event={"ID":"037e7d3b-3523-4406-b7d6-39dc9c9256c3","Type":"ContainerDied","Data":"1197dd413f88ac18457e94f0f481859c62eef0ab7e3bdf1c4e3d0144fbb38113"} Dec 03 09:31:55 crc kubenswrapper[4856]: I1203 09:31:55.447842 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1197dd413f88ac18457e94f0f481859c62eef0ab7e3bdf1c4e3d0144fbb38113" Dec 03 09:31:56 crc kubenswrapper[4856]: I1203 09:31:56.460430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ebc0dc7-337a-46c5-ae8e-98ca475977a0","Type":"ContainerStarted","Data":"785cbb1455dcd0b5b08bec1648549c7ca0cf9c8095577b0e37b3bde403f430b3"} Dec 03 09:31:56 crc kubenswrapper[4856]: I1203 09:31:56.500622 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.500589639 podStartE2EDuration="5.500589639s" podCreationTimestamp="2025-12-03 09:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:31:56.486766981 +0000 UTC m=+1184.669659302" watchObservedRunningTime="2025-12-03 09:31:56.500589639 +0000 UTC m=+1184.683481940" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.192959 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2jx7k"] Dec 03 09:31:57 crc kubenswrapper[4856]: E1203 09:31:57.194242 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2927120f-ce3e-4ca6-8522-80b99afcdcc8" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194266 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2927120f-ce3e-4ca6-8522-80b99afcdcc8" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: E1203 09:31:57.194283 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037e7d3b-3523-4406-b7d6-39dc9c9256c3" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194293 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="037e7d3b-3523-4406-b7d6-39dc9c9256c3" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: E1203 09:31:57.194323 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea33628-dc6c-486d-8214-0c17593c5c65" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194331 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea33628-dc6c-486d-8214-0c17593c5c65" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: E1203 09:31:57.194355 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc2b70e-a05b-4d56-87f6-2656e84d9a77" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194363 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc2b70e-a05b-4d56-87f6-2656e84d9a77" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: E1203 09:31:57.194373 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" 
containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194380 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: E1203 09:31:57.194409 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e53d88-906a-49d5-8687-bac531c74375" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194418 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e53d88-906a-49d5-8687-bac531c74375" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194658 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="037e7d3b-3523-4406-b7d6-39dc9c9256c3" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194683 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2927120f-ce3e-4ca6-8522-80b99afcdcc8" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194696 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc2b70e-a05b-4d56-87f6-2656e84d9a77" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194713 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" containerName="mariadb-account-create-update" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194733 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e53d88-906a-49d5-8687-bac531c74375" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.194744 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea33628-dc6c-486d-8214-0c17593c5c65" containerName="mariadb-database-create" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.196261 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.200623 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.201134 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-56vpq" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.201376 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.211472 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2jx7k"] Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.293599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-scripts\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.293713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f599t\" (UniqueName: \"kubernetes.io/projected/20bb4e2f-c274-4f88-a8b4-de6e9b737113-kube-api-access-f599t\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.293754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.293838 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-config-data\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.395522 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-scripts\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.395992 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f599t\" (UniqueName: \"kubernetes.io/projected/20bb4e2f-c274-4f88-a8b4-de6e9b737113-kube-api-access-f599t\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.396152 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: 
\"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.396910 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-config-data\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.401266 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.401658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-scripts\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.412281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-config-data\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.522926 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerStarted","Data":"6b4da5b870c2fff7218ac48b58b9090dcf3e21458b7cfb125bde736a64421a4b"} Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.525197 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.544009 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f599t\" (UniqueName: \"kubernetes.io/projected/20bb4e2f-c274-4f88-a8b4-de6e9b737113-kube-api-access-f599t\") pod \"nova-cell0-conductor-db-sync-2jx7k\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.605850 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.731045142 podStartE2EDuration="8.605820648s" podCreationTimestamp="2025-12-03 09:31:49 +0000 UTC" firstStartedPulling="2025-12-03 09:31:50.794330797 +0000 UTC m=+1178.977223098" lastFinishedPulling="2025-12-03 09:31:56.669106293 +0000 UTC m=+1184.851998604" observedRunningTime="2025-12-03 09:31:57.60311905 +0000 UTC m=+1185.786011351" watchObservedRunningTime="2025-12-03 09:31:57.605820648 +0000 UTC m=+1185.788712949" Dec 03 09:31:57 crc kubenswrapper[4856]: I1203 09:31:57.813502 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:31:58 crc kubenswrapper[4856]: I1203 09:31:58.213644 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2jx7k"] Dec 03 09:31:58 crc kubenswrapper[4856]: I1203 09:31:58.535174 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" event={"ID":"20bb4e2f-c274-4f88-a8b4-de6e9b737113","Type":"ContainerStarted","Data":"66f987df4b5cb3644be7e7645f32d0834b17bcf57c5936e9df85e6efa6578282"} Dec 03 09:31:59 crc kubenswrapper[4856]: I1203 09:31:59.586280 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:31:59 crc kubenswrapper[4856]: I1203 09:31:59.600347 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d5fb5d859-8njp2" Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.058288 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.058817 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.124610 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.171404 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.582095 4856 generic.go:334] "Generic (PLEG): container finished" podID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerID="e112b0ed988513a47362d5c192e3a59fda82b7af151a23e36864a8eb36d0c0fd" exitCode=0 Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.582288 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fd85d868-sh6bv" event={"ID":"e119392f-94d6-436b-ac48-e548d91a8f0a","Type":"ContainerDied","Data":"e112b0ed988513a47362d5c192e3a59fda82b7af151a23e36864a8eb36d0c0fd"} Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.582856 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:01 crc kubenswrapper[4856]: I1203 09:32:01.582889 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.158507 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.158922 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.174112 4856 util.go:48] "No ready sandbox for pod can be found. 
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.263034 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.266999 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.353787 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctl5z\" (UniqueName: \"kubernetes.io/projected/e119392f-94d6-436b-ac48-e548d91a8f0a-kube-api-access-ctl5z\") pod \"e119392f-94d6-436b-ac48-e548d91a8f0a\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") "
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.353888 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-combined-ca-bundle\") pod \"e119392f-94d6-436b-ac48-e548d91a8f0a\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") "
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.354174 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-config\") pod \"e119392f-94d6-436b-ac48-e548d91a8f0a\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") "
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.354282 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-httpd-config\") pod \"e119392f-94d6-436b-ac48-e548d91a8f0a\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") "
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.354387 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-ovndb-tls-certs\") pod \"e119392f-94d6-436b-ac48-e548d91a8f0a\" (UID: \"e119392f-94d6-436b-ac48-e548d91a8f0a\") "
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.378601 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e119392f-94d6-436b-ac48-e548d91a8f0a" (UID: "e119392f-94d6-436b-ac48-e548d91a8f0a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.384219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e119392f-94d6-436b-ac48-e548d91a8f0a-kube-api-access-ctl5z" (OuterVolumeSpecName: "kube-api-access-ctl5z") pod "e119392f-94d6-436b-ac48-e548d91a8f0a" (UID: "e119392f-94d6-436b-ac48-e548d91a8f0a"). InnerVolumeSpecName "kube-api-access-ctl5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.459364 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.459396 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctl5z\" (UniqueName: \"kubernetes.io/projected/e119392f-94d6-436b-ac48-e548d91a8f0a-kube-api-access-ctl5z\") on node \"crc\" DevicePath \"\""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.568500 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-config" (OuterVolumeSpecName: "config") pod "e119392f-94d6-436b-ac48-e548d91a8f0a" (UID: "e119392f-94d6-436b-ac48-e548d91a8f0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.572632 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-config\") on node \"crc\" DevicePath \"\""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.639166 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e119392f-94d6-436b-ac48-e548d91a8f0a" (UID: "e119392f-94d6-436b-ac48-e548d91a8f0a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.649913 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fd85d868-sh6bv" event={"ID":"e119392f-94d6-436b-ac48-e548d91a8f0a","Type":"ContainerDied","Data":"3287c045fd93c4492180d8df98211aab998381229f00f41c160a1d6d7808bb03"}
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.650039 4856 scope.go:117] "RemoveContainer" containerID="89de9afc15304197d4096c2102f77a838bc3f096ae738c480b0548a5b97fd708"
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.650382 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fd85d868-sh6bv"
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.651720 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.652518 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.664200 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e119392f-94d6-436b-ac48-e548d91a8f0a" (UID: "e119392f-94d6-436b-ac48-e548d91a8f0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.675437 4856 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.675489 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e119392f-94d6-436b-ac48-e548d91a8f0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.767719 4856 scope.go:117] "RemoveContainer" containerID="e112b0ed988513a47362d5c192e3a59fda82b7af151a23e36864a8eb36d0c0fd" Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.982070 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74fd85d868-sh6bv"] Dec 03 09:32:02 crc kubenswrapper[4856]: I1203 09:32:02.990185 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74fd85d868-sh6bv"] Dec 03 09:32:03 crc kubenswrapper[4856]: I1203 09:32:03.568216 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:03 crc kubenswrapper[4856]: I1203 09:32:03.568494 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-central-agent" containerID="cri-o://3d633b2f73e9f156393f67d4494de8d0d8868a89dca013ec682b208ea860c7f6" gracePeriod=30 Dec 03 09:32:03 crc kubenswrapper[4856]: I1203 09:32:03.568574 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="sg-core" containerID="cri-o://5427c79422b1806aa27548c294b958b31ac35c4f177463f732e0a4cf62cfe6c3" gracePeriod=30 Dec 03 09:32:03 crc kubenswrapper[4856]: I1203 09:32:03.568721 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-notification-agent" containerID="cri-o://cf0256360e1a28097ccabbf9d1c9b3682eadd8d9b3d953e2fce270b596170095" gracePeriod=30 Dec 03 09:32:03 crc kubenswrapper[4856]: I1203 09:32:03.568782 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="proxy-httpd" containerID="cri-o://6b4da5b870c2fff7218ac48b58b9090dcf3e21458b7cfb125bde736a64421a4b" gracePeriod=30 Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.381323 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.381535 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.689926 4856 generic.go:334] "Generic (PLEG): container finished" podID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerID="6b4da5b870c2fff7218ac48b58b9090dcf3e21458b7cfb125bde736a64421a4b" exitCode=0 Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.690178 4856 generic.go:334] "Generic (PLEG): container finished" podID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerID="5427c79422b1806aa27548c294b958b31ac35c4f177463f732e0a4cf62cfe6c3" exitCode=2 Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.690193 4856 
generic.go:334] "Generic (PLEG): container finished" podID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerID="cf0256360e1a28097ccabbf9d1c9b3682eadd8d9b3d953e2fce270b596170095" exitCode=0 Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.690323 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.690335 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.717069 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" path="/var/lib/kubelet/pods/e119392f-94d6-436b-ac48-e548d91a8f0a/volumes" Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.719862 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerDied","Data":"6b4da5b870c2fff7218ac48b58b9090dcf3e21458b7cfb125bde736a64421a4b"} Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.720056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerDied","Data":"5427c79422b1806aa27548c294b958b31ac35c4f177463f732e0a4cf62cfe6c3"} Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.720075 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerDied","Data":"cf0256360e1a28097ccabbf9d1c9b3682eadd8d9b3d953e2fce270b596170095"} Dec 03 09:32:04 crc kubenswrapper[4856]: I1203 09:32:04.970669 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 09:32:05 crc kubenswrapper[4856]: I1203 09:32:05.704998 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 09:32:05 crc kubenswrapper[4856]: I1203 09:32:05.705558 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 09:32:05 crc kubenswrapper[4856]: I1203 09:32:05.793214 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 09:32:06 crc kubenswrapper[4856]: I1203 09:32:06.723482 4856 generic.go:334] "Generic (PLEG): container finished" podID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerID="3d633b2f73e9f156393f67d4494de8d0d8868a89dca013ec682b208ea860c7f6" exitCode=0 Dec 03 09:32:06 crc kubenswrapper[4856]: I1203 09:32:06.723527 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerDied","Data":"3d633b2f73e9f156393f67d4494de8d0d8868a89dca013ec682b208ea860c7f6"} Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.680658 4856 util.go:48] "No ready sandbox for pod can be found. 
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.796070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f3c997-bcf8-4fa8-b685-717479d7096e","Type":"ContainerDied","Data":"0d9f2972ffd05e8fdebbe526ac453b3fd452867daedeee468288daa369220ccb"}
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.796756 4856 scope.go:117] "RemoveContainer" containerID="6b4da5b870c2fff7218ac48b58b9090dcf3e21458b7cfb125bde736a64421a4b"
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.796133 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.806598 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-log-httpd\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.806683 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-sg-core-conf-yaml\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.806725 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-config-data\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.806762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7752z\" (UniqueName: \"kubernetes.io/projected/89f3c997-bcf8-4fa8-b685-717479d7096e-kube-api-access-7752z\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.806969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-combined-ca-bundle\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.807050 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-scripts\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.807118 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-run-httpd\") pod \"89f3c997-bcf8-4fa8-b685-717479d7096e\" (UID: \"89f3c997-bcf8-4fa8-b685-717479d7096e\") "
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.807720 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.808241 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.819132 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f3c997-bcf8-4fa8-b685-717479d7096e-kube-api-access-7752z" (OuterVolumeSpecName: "kube-api-access-7752z") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "kube-api-access-7752z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.820131 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-scripts" (OuterVolumeSpecName: "scripts") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.824529 4856 scope.go:117] "RemoveContainer" containerID="5427c79422b1806aa27548c294b958b31ac35c4f177463f732e0a4cf62cfe6c3"
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.848592 4856 scope.go:117] "RemoveContainer" containerID="cf0256360e1a28097ccabbf9d1c9b3682eadd8d9b3d953e2fce270b596170095"
Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.857480 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.883860 4856 scope.go:117] "RemoveContainer" containerID="3d633b2f73e9f156393f67d4494de8d0d8868a89dca013ec682b208ea860c7f6" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.910235 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.910276 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.910292 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7752z\" (UniqueName: \"kubernetes.io/projected/89f3c997-bcf8-4fa8-b685-717479d7096e-kube-api-access-7752z\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.910306 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.910315 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f3c997-bcf8-4fa8-b685-717479d7096e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.928242 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:11 crc kubenswrapper[4856]: I1203 09:32:11.944683 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-config-data" (OuterVolumeSpecName: "config-data") pod "89f3c997-bcf8-4fa8-b685-717479d7096e" (UID: "89f3c997-bcf8-4fa8-b685-717479d7096e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.013116 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.013165 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f3c997-bcf8-4fa8-b685-717479d7096e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.185517 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.230647 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.252380 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:12 crc kubenswrapper[4856]: E1203 09:32:12.253122 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-central-agent" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.253142 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-central-agent" Dec 03 09:32:12 crc kubenswrapper[4856]: E1203 09:32:12.253156 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-httpd" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.253162 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-httpd" Dec 03 09:32:12 crc kubenswrapper[4856]: E1203 09:32:12.253170 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="proxy-httpd" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.253176 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="proxy-httpd" Dec 03 09:32:12 crc kubenswrapper[4856]: E1203 09:32:12.253185 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="sg-core" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.253191 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="sg-core" Dec 03 09:32:12 crc kubenswrapper[4856]: E1203 09:32:12.253200 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-notification-agent" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.253206 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-notification-agent" Dec 03 09:32:12 crc kubenswrapper[4856]: E1203 09:32:12.253216 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-api" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.253222 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-api" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.254319 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="proxy-httpd" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.254342 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-httpd" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.254353 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-notification-agent" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.254374 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e119392f-94d6-436b-ac48-e548d91a8f0a" containerName="neutron-api" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.254387 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="ceilometer-central-agent" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.254407 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" containerName="sg-core" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.256791 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.261453 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.264735 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.268564 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.332085 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-config-data\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.332385 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58v5l\" (UniqueName: \"kubernetes.io/projected/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-kube-api-access-58v5l\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.332554 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-scripts\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.332684 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-log-httpd\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.332913 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.333026 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.333146 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435428 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-scripts\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435485 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-log-httpd\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435555 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-run-httpd\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435577 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435599 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435661 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-config-data\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.435693 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58v5l\" (UniqueName: \"kubernetes.io/projected/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-kube-api-access-58v5l\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.436098 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-run-httpd\") pod 
\"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.436564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-log-httpd\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.443748 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-scripts\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.444801 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.450983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-config-data\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.451764 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.456075 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58v5l\" (UniqueName: \"kubernetes.io/projected/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-kube-api-access-58v5l\") pod \"ceilometer-0\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") " pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.594710 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.717326 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f3c997-bcf8-4fa8-b685-717479d7096e" path="/var/lib/kubelet/pods/89f3c997-bcf8-4fa8-b685-717479d7096e/volumes" Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.813863 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" event={"ID":"20bb4e2f-c274-4f88-a8b4-de6e9b737113","Type":"ContainerStarted","Data":"2074d185adc5cd92b3be854bd05a72d8ee2cb15cddd3adead1ab6c9852b7e56d"} Dec 03 09:32:12 crc kubenswrapper[4856]: I1203 09:32:12.843892 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" podStartSLOduration=2.537553368 podStartE2EDuration="15.843855345s" podCreationTimestamp="2025-12-03 09:31:57 +0000 UTC" firstStartedPulling="2025-12-03 09:31:58.249044504 +0000 UTC m=+1186.431936805" lastFinishedPulling="2025-12-03 09:32:11.555346481 +0000 UTC m=+1199.738238782" observedRunningTime="2025-12-03 09:32:12.835464564 +0000 UTC m=+1201.018356865" watchObservedRunningTime="2025-12-03 09:32:12.843855345 +0000 UTC m=+1201.026747636" Dec 03 09:32:13 crc kubenswrapper[4856]: I1203 09:32:13.122997 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:32:13 crc kubenswrapper[4856]: I1203 09:32:13.137097 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:13 crc kubenswrapper[4856]: I1203 09:32:13.835006 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerStarted","Data":"068f3198525c8cbe2827fb9333e975c53607a9abf17e7dbbc11d8ffcb6999ed2"} Dec 03 09:32:14 crc kubenswrapper[4856]: I1203 09:32:14.846561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerStarted","Data":"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db"} Dec 03 09:32:14 crc kubenswrapper[4856]: I1203 09:32:14.847159 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerStarted","Data":"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b"} Dec 03 09:32:15 crc kubenswrapper[4856]: I1203 09:32:15.908304 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerStarted","Data":"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"} Dec 03 09:32:16 crc kubenswrapper[4856]: I1203 09:32:16.942045 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerStarted","Data":"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"} Dec 03 09:32:16 crc kubenswrapper[4856]: I1203 09:32:16.942610 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 09:32:16 crc kubenswrapper[4856]: I1203 09:32:16.990008 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5758738380000001 podStartE2EDuration="4.989978743s" podCreationTimestamp="2025-12-03 09:32:12 +0000 UTC" firstStartedPulling="2025-12-03 
Dec 03 09:32:18 crc kubenswrapper[4856]: I1203 09:32:18.471676 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 09:32:18 crc kubenswrapper[4856]: I1203 09:32:18.472150 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1bb3b31a-4225-44ba-a8c8-41f6a46fb6c3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: i/o timeout"
Dec 03 09:32:24 crc kubenswrapper[4856]: I1203 09:32:24.650330 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 09:32:24 crc kubenswrapper[4856]: I1203 09:32:24.651872 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-central-agent" containerID="cri-o://8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b" gracePeriod=30
Dec 03 09:32:24 crc kubenswrapper[4856]: I1203 09:32:24.652023 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-notification-agent" containerID="cri-o://06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db" gracePeriod=30
Dec 03 09:32:24 crc kubenswrapper[4856]: I1203 09:32:24.651964 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="sg-core" containerID="cri-o://c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798" gracePeriod=30
Dec 03 09:32:24 crc kubenswrapper[4856]: I1203 09:32:24.652183 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="proxy-httpd" containerID="cri-o://aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac" gracePeriod=30
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.037425 4856 generic.go:334] "Generic (PLEG): container finished" podID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerID="aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac" exitCode=0
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.038334 4856 generic.go:334] "Generic (PLEG): container finished" podID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerID="c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798" exitCode=2
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.037487 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerDied","Data":"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"}
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.038395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerDied","Data":"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"}
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.601718 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.780969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-sg-core-conf-yaml\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781065 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-run-httpd\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781119 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-log-httpd\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781153 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58v5l\" (UniqueName: \"kubernetes.io/projected/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-kube-api-access-58v5l\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781193 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-scripts\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-combined-ca-bundle\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781405 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-config-data\") pod \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\" (UID: \"f8f7c4c9-04a5-42d7-9eb8-b65caa272499\") "
Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781765 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.781948 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.782401 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.790536 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-scripts" (OuterVolumeSpecName: "scripts") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.795190 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-kube-api-access-58v5l" (OuterVolumeSpecName: "kube-api-access-58v5l") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "kube-api-access-58v5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.827642 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.866106 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.883698 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.883744 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.883759 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58v5l\" (UniqueName: \"kubernetes.io/projected/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-kube-api-access-58v5l\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.883773 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.883786 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.926381 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-config-data" (OuterVolumeSpecName: "config-data") pod "f8f7c4c9-04a5-42d7-9eb8-b65caa272499" (UID: "f8f7c4c9-04a5-42d7-9eb8-b65caa272499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:25 crc kubenswrapper[4856]: I1203 09:32:25.985786 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8f7c4c9-04a5-42d7-9eb8-b65caa272499-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.051833 4856 generic.go:334] "Generic (PLEG): container finished" podID="20bb4e2f-c274-4f88-a8b4-de6e9b737113" containerID="2074d185adc5cd92b3be854bd05a72d8ee2cb15cddd3adead1ab6c9852b7e56d" exitCode=0 Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.051912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" event={"ID":"20bb4e2f-c274-4f88-a8b4-de6e9b737113","Type":"ContainerDied","Data":"2074d185adc5cd92b3be854bd05a72d8ee2cb15cddd3adead1ab6c9852b7e56d"} Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055554 4856 generic.go:334] "Generic (PLEG): container finished" podID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerID="06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db" exitCode=0 Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055581 4856 generic.go:334] "Generic (PLEG): container finished" podID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerID="8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b" exitCode=0 Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055601 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerDied","Data":"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db"} Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055640 4856 kubelet.go:2453] "SyncLoop 
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055654 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8f7c4c9-04a5-42d7-9eb8-b65caa272499","Type":"ContainerDied","Data":"068f3198525c8cbe2827fb9333e975c53607a9abf17e7dbbc11d8ffcb6999ed2"}
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055673 4856 scope.go:117] "RemoveContainer" containerID="aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.055747 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.090848 4856 scope.go:117] "RemoveContainer" containerID="c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.111058 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.119030 4856 scope.go:117] "RemoveContainer" containerID="06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.124791 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136003 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.136564 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-central-agent"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136597 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-central-agent"
Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.136617 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="proxy-httpd"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136627 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="proxy-httpd"
Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.136659 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-notification-agent"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136669 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-notification-agent"
Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.136685 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="sg-core"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136692 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="sg-core"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136950 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-notification-agent"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136971 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="ceilometer-central-agent"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.136992 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="proxy-httpd"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.137013 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" containerName="sg-core"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.139241 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.142304 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.142421 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.157189 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.158859 4856 scope.go:117] "RemoveContainer" containerID="8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.184516 4856 scope.go:117] "RemoveContainer" containerID="aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"
Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.185302 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac\": container with ID starting with aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac not found: ID does not exist" containerID="aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.185369 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"} err="failed to get container status \"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac\": rpc error: code = NotFound desc = could not find container \"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac\": container with ID starting with aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac not found: ID does not exist"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.185421 4856 scope.go:117] "RemoveContainer" containerID="c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"
Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.185984 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": container with ID starting with c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798 not found: ID does not exist" containerID="c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"
Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.186033 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"} err="failed to get container status \"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": rpc error: code = NotFound desc = could not find container \"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": container with ID starting with c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798 not found: ID does not exist"
\"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": rpc error: code = NotFound desc = could not find container \"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": container with ID starting with c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798 not found: ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.186060 4856 scope.go:117] "RemoveContainer" containerID="06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db" Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.186962 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db\": container with ID starting with 06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db not found: ID does not exist" containerID="06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.187015 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db"} err="failed to get container status \"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db\": rpc error: code = NotFound desc = could not find container \"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db\": container with ID starting with 06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db not found: ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.187035 4856 scope.go:117] "RemoveContainer" containerID="8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b" Dec 03 09:32:26 crc kubenswrapper[4856]: E1203 09:32:26.187510 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b\": container with ID starting with 8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b not found: ID does not exist" containerID="8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.187572 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b"} err="failed to get container status \"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b\": rpc error: code = NotFound desc = could not find container \"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b\": container with ID starting with 8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b not found: ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.187590 4856 scope.go:117] "RemoveContainer" containerID="aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.187936 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac"} err="failed to get container status \"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac\": rpc error: code = NotFound desc = could not find container \"aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac\": container with ID starting with aee31a43a69dca9f3286d758c804e962a6ccaa693031a84ed6a9be0d2eea2fac not found: 
ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.187992 4856 scope.go:117] "RemoveContainer" containerID="c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.188506 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798"} err="failed to get container status \"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": rpc error: code = NotFound desc = could not find container \"c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798\": container with ID starting with c8a1ad1cdab44490bcdaebc773a67c7cd01151582ec7fe5b783f067fad8ac798 not found: ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.188594 4856 scope.go:117] "RemoveContainer" containerID="06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.188968 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db"} err="failed to get container status \"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db\": rpc error: code = NotFound desc = could not find container \"06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db\": container with ID starting with 06ee4af78fd09e4555dbe457e03cd0887c60578657c82a843104666280ecf4db not found: ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.188996 4856 scope.go:117] "RemoveContainer" containerID="8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.189778 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b"} err="failed to get container status \"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b\": rpc error: code = NotFound desc = could not find container \"8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b\": container with ID starting with 8afa38a4f1306e1298005b6c51076b31d53786d620f4b379dc56e2fb52d0675b not found: ID does not exist" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.295558 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-config-data\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.296200 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-log-httpd\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.296291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-scripts\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.296387 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.296606 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nk5j\" (UniqueName: \"kubernetes.io/projected/d3063012-52f3-452f-b4b7-24da113b1ba4-kube-api-access-9nk5j\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.296759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.296844 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-run-httpd\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.398962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nk5j\" (UniqueName: \"kubernetes.io/projected/d3063012-52f3-452f-b4b7-24da113b1ba4-kube-api-access-9nk5j\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.399061 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.399112 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-run-httpd\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.399153 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-config-data\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.399208 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-log-httpd\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.399247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-scripts\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 
crc kubenswrapper[4856]: I1203 09:32:26.399305 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.401276 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-run-httpd\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.403568 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.404071 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.404525 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-log-httpd\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.409663 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-scripts\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.426259 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nk5j\" (UniqueName: \"kubernetes.io/projected/d3063012-52f3-452f-b4b7-24da113b1ba4-kube-api-access-9nk5j\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.427853 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-config-data\") pod \"ceilometer-0\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.470120 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.708082 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f7c4c9-04a5-42d7-9eb8-b65caa272499" path="/var/lib/kubelet/pods/f8f7c4c9-04a5-42d7-9eb8-b65caa272499/volumes" Dec 03 09:32:26 crc kubenswrapper[4856]: I1203 09:32:26.923462 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.067243 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerStarted","Data":"e8698b494f79599a3eb2bd0e1be24246632bdb00018d50a02304bb2cb8ea8c59"} Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.508217 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.625263 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-combined-ca-bundle\") pod \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.625407 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-scripts\") pod \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.625450 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f599t\" (UniqueName: \"kubernetes.io/projected/20bb4e2f-c274-4f88-a8b4-de6e9b737113-kube-api-access-f599t\") pod \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.625490 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-config-data\") pod \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\" (UID: \"20bb4e2f-c274-4f88-a8b4-de6e9b737113\") " Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.631967 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-scripts" (OuterVolumeSpecName: "scripts") pod "20bb4e2f-c274-4f88-a8b4-de6e9b737113" (UID: "20bb4e2f-c274-4f88-a8b4-de6e9b737113"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.632433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bb4e2f-c274-4f88-a8b4-de6e9b737113-kube-api-access-f599t" (OuterVolumeSpecName: "kube-api-access-f599t") pod "20bb4e2f-c274-4f88-a8b4-de6e9b737113" (UID: "20bb4e2f-c274-4f88-a8b4-de6e9b737113"). InnerVolumeSpecName "kube-api-access-f599t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.658749 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20bb4e2f-c274-4f88-a8b4-de6e9b737113" (UID: "20bb4e2f-c274-4f88-a8b4-de6e9b737113"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.661674 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-config-data" (OuterVolumeSpecName: "config-data") pod "20bb4e2f-c274-4f88-a8b4-de6e9b737113" (UID: "20bb4e2f-c274-4f88-a8b4-de6e9b737113"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.727583 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.727644 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f599t\" (UniqueName: \"kubernetes.io/projected/20bb4e2f-c274-4f88-a8b4-de6e9b737113-kube-api-access-f599t\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.727658 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:27 crc kubenswrapper[4856]: I1203 09:32:27.727667 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bb4e2f-c274-4f88-a8b4-de6e9b737113-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.083662 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" event={"ID":"20bb4e2f-c274-4f88-a8b4-de6e9b737113","Type":"ContainerDied","Data":"66f987df4b5cb3644be7e7645f32d0834b17bcf57c5936e9df85e6efa6578282"} Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.083757 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f987df4b5cb3644be7e7645f32d0834b17bcf57c5936e9df85e6efa6578282" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.083792 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2jx7k" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.087106 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerStarted","Data":"2faeaabf7a368bd1df8f36bf868af9abb7540f888d9f4b6ed2a8923b9a185cbd"} Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.260489 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 09:32:28 crc kubenswrapper[4856]: E1203 09:32:28.260966 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bb4e2f-c274-4f88-a8b4-de6e9b737113" containerName="nova-cell0-conductor-db-sync" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.260994 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bb4e2f-c274-4f88-a8b4-de6e9b737113" containerName="nova-cell0-conductor-db-sync" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.261223 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bb4e2f-c274-4f88-a8b4-de6e9b737113" containerName="nova-cell0-conductor-db-sync" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.261961 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.269930 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.271369 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-56vpq" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.271454 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.443875 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5cs\" (UniqueName: \"kubernetes.io/projected/509448a9-9abb-4e44-b37f-79faeadec13e-kube-api-access-jh5cs\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.443943 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509448a9-9abb-4e44-b37f-79faeadec13e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.444072 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509448a9-9abb-4e44-b37f-79faeadec13e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.546012 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh5cs\" (UniqueName: \"kubernetes.io/projected/509448a9-9abb-4e44-b37f-79faeadec13e-kube-api-access-jh5cs\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.546091 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509448a9-9abb-4e44-b37f-79faeadec13e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.546187 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509448a9-9abb-4e44-b37f-79faeadec13e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.554164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509448a9-9abb-4e44-b37f-79faeadec13e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.555786 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509448a9-9abb-4e44-b37f-79faeadec13e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.565347 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh5cs\" (UniqueName: \"kubernetes.io/projected/509448a9-9abb-4e44-b37f-79faeadec13e-kube-api-access-jh5cs\") pod \"nova-cell0-conductor-0\" (UID: \"509448a9-9abb-4e44-b37f-79faeadec13e\") " pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:28 crc kubenswrapper[4856]: I1203 09:32:28.584852 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:29 crc kubenswrapper[4856]: I1203 09:32:29.101254 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 09:32:30 crc kubenswrapper[4856]: I1203 09:32:30.123084 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerStarted","Data":"553957dcb991374ce9351eb70abb0e00a658a243979f55da766cc9746cc45117"} Dec 03 09:32:30 crc kubenswrapper[4856]: I1203 09:32:30.125233 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"509448a9-9abb-4e44-b37f-79faeadec13e","Type":"ContainerStarted","Data":"f8be4fa4ad05455c7763189e50ffd4205d1aac5a67b53ebdd0de999966cd26cf"} Dec 03 09:32:30 crc kubenswrapper[4856]: I1203 09:32:30.125307 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"509448a9-9abb-4e44-b37f-79faeadec13e","Type":"ContainerStarted","Data":"ded2989a96907eef23f5c7eda102c5d274313a3929f8748107d30e26784a97fd"} Dec 03 09:32:30 crc kubenswrapper[4856]: I1203 09:32:30.125436 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:30 crc kubenswrapper[4856]: I1203 09:32:30.152952 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.152923259 podStartE2EDuration="2.152923259s" podCreationTimestamp="2025-12-03 09:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:30.145257416 +0000 UTC m=+1218.328149737" watchObservedRunningTime="2025-12-03 09:32:30.152923259 +0000 UTC m=+1218.335815570" Dec 03 09:32:31 crc kubenswrapper[4856]: I1203 09:32:31.141440 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerStarted","Data":"82a9ab95cbbbf7dec912f2d8c66e0c8eb619c7552888114cf2dfb4d0b62dce69"} Dec 03 09:32:32 crc kubenswrapper[4856]: I1203 09:32:32.152172 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerStarted","Data":"734d87fbe647bc9c3bcd98f3c5945dc09efba759c2170c8ec585c5a6072c276f"} Dec 03 09:32:32 crc kubenswrapper[4856]: I1203 09:32:32.153674 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 09:32:32 crc kubenswrapper[4856]: I1203 09:32:32.180921 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.56871836 podStartE2EDuration="6.180881663s" podCreationTimestamp="2025-12-03 09:32:26 +0000 UTC" firstStartedPulling="2025-12-03 09:32:26.92771365 +0000 UTC m=+1215.110605941" lastFinishedPulling="2025-12-03 09:32:31.539876933 +0000 UTC m=+1219.722769244" observedRunningTime="2025-12-03 09:32:32.175522168 +0000 UTC m=+1220.358414469" watchObservedRunningTime="2025-12-03 09:32:32.180881663 +0000 UTC m=+1220.363773964" Dec 03 09:32:38 crc kubenswrapper[4856]: I1203 09:32:38.625259 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.172921 4856 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-5sgp5"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.174794 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.179113 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.185493 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.186448 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5sgp5"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.358726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.358971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrzw\" (UniqueName: \"kubernetes.io/projected/6bb4a4f4-8246-453d-a39d-db70774f8e5b-kube-api-access-mtrzw\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.359023 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-config-data\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.359192 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-scripts\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.361630 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.366184 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.369453 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.389106 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.460649 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrzw\" (UniqueName: \"kubernetes.io/projected/6bb4a4f4-8246-453d-a39d-db70774f8e5b-kube-api-access-mtrzw\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.460700 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-config-data\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.460752 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-scripts\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.460818 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.473329 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-config-data\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.478564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.493436 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrzw\" (UniqueName: \"kubernetes.io/projected/6bb4a4f4-8246-453d-a39d-db70774f8e5b-kube-api-access-mtrzw\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.510748 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-scripts\") pod \"nova-cell0-cell-mapping-5sgp5\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.512388 4856 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.515692 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.521645 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.554933 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.564744 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8de6f3-82dc-4897-8036-611d501bfa17-logs\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.564929 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-config-data\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.564968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.565071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4lb\" (UniqueName: \"kubernetes.io/projected/ea8de6f3-82dc-4897-8036-611d501bfa17-kube-api-access-hd4lb\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.572890 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.574639 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.580038 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.625140 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.654962 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.658101 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.666253 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.668461 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-config-data\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.668569 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8de6f3-82dc-4897-8036-611d501bfa17-logs\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669220 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8de6f3-82dc-4897-8036-611d501bfa17-logs\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669300 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669407 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-config-data\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669449 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669508 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qld\" (UniqueName: \"kubernetes.io/projected/3afba738-979a-4454-8a8b-cc59387b2814-kube-api-access-q6qld\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669579 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afba738-979a-4454-8a8b-cc59387b2814-logs\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.669637 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd4lb\" (UniqueName: \"kubernetes.io/projected/ea8de6f3-82dc-4897-8036-611d501bfa17-kube-api-access-hd4lb\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.683198 
4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-config-data\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.696029 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.704290 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd4lb\" (UniqueName: \"kubernetes.io/projected/ea8de6f3-82dc-4897-8036-611d501bfa17-kube-api-access-hd4lb\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.705335 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.772904 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.773713 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-294px"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.776215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.776927 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7tjk\" (UniqueName: \"kubernetes.io/projected/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-kube-api-access-f7tjk\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.776996 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.777049 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6qld\" (UniqueName: \"kubernetes.io/projected/3afba738-979a-4454-8a8b-cc59387b2814-kube-api-access-q6qld\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.777095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-config-data\") pod \"nova-scheduler-0\" (UID: 
\"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.777225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afba738-979a-4454-8a8b-cc59387b2814-logs\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.777262 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qfh\" (UniqueName: \"kubernetes.io/projected/6c2ca111-4677-4c62-85ca-92240c89b835-kube-api-access-25qfh\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.777393 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.777851 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afba738-979a-4454-8a8b-cc59387b2814-logs\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.778903 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.779073 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-config-data\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.787583 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.790978 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-config-data\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.811307 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.836667 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6qld\" (UniqueName: \"kubernetes.io/projected/3afba738-979a-4454-8a8b-cc59387b2814-kube-api-access-q6qld\") pod \"nova-metadata-0\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.853412 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-294px"] Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.880500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qfh\" (UniqueName: \"kubernetes.io/projected/6c2ca111-4677-4c62-85ca-92240c89b835-kube-api-access-25qfh\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881042 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881103 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881123 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881158 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881222 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxj2\" (UniqueName: \"kubernetes.io/projected/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-kube-api-access-qnxj2\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881256 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881288 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7tjk\" (UniqueName: \"kubernetes.io/projected/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-kube-api-access-f7tjk\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881310 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-config\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881335 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.881368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-config-data\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.887719 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-config-data\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.891327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.894677 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.897864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.906247 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qfh\" (UniqueName: \"kubernetes.io/projected/6c2ca111-4677-4c62-85ca-92240c89b835-kube-api-access-25qfh\") pod \"nova-scheduler-0\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " pod="openstack/nova-scheduler-0" 
Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.909753 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7tjk\" (UniqueName: \"kubernetes.io/projected/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-kube-api-access-f7tjk\") pod \"nova-cell1-novncproxy-0\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.926744 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.951260 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.987683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.987777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.987858 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.987907 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxj2\" (UniqueName: \"kubernetes.io/projected/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-kube-api-access-qnxj2\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.987952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.988003 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-config\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.989315 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.989347 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-config\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.990131 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.991025 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.991178 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:32:39 crc kubenswrapper[4856]: I1203 09:32:39.991652 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.068947 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxj2\" (UniqueName: \"kubernetes.io/projected/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-kube-api-access-qnxj2\") pod \"dnsmasq-dns-bccf8f775-294px\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.159382 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.172278 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:40 crc kubenswrapper[4856]: W1203 09:32:40.445096 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bb4a4f4_8246_453d_a39d_db70774f8e5b.slice/crio-ee0b3b195e5a3a8c71cca33d7edc3dd26b309fbc4d30e7b1f55a47ad4c6fe27c WatchSource:0}: Error finding container ee0b3b195e5a3a8c71cca33d7edc3dd26b309fbc4d30e7b1f55a47ad4c6fe27c: Status 404 returned error can't find the container with id ee0b3b195e5a3a8c71cca33d7edc3dd26b309fbc4d30e7b1f55a47ad4c6fe27c Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.445790 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5sgp5"] Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.532081 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smxjg"] Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.533492 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.543088 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.544905 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.563276 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smxjg"] Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.610983 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-scripts\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.611094 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-config-data\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.611164 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.611269 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xp7h\" (UniqueName: \"kubernetes.io/projected/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-kube-api-access-9xp7h\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.654595 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:40 crc kubenswrapper[4856]: W1203 09:32:40.668571 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea8de6f3_82dc_4897_8036_611d501bfa17.slice/crio-23f433998a333f8c93f7d87d2065820bde549fefae0901626f573f3625dc3046 WatchSource:0}: Error finding container 23f433998a333f8c93f7d87d2065820bde549fefae0901626f573f3625dc3046: Status 404 returned error can't find the container with id 23f433998a333f8c93f7d87d2065820bde549fefae0901626f573f3625dc3046 Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.713875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.714129 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xp7h\" (UniqueName: 
\"kubernetes.io/projected/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-kube-api-access-9xp7h\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.714235 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-scripts\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.714288 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-config-data\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.727123 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-scripts\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.727866 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-config-data\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.730620 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.755652 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xp7h\" (UniqueName: \"kubernetes.io/projected/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-kube-api-access-9xp7h\") pod \"nova-cell1-conductor-db-sync-smxjg\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.760220 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:32:40 crc kubenswrapper[4856]: W1203 09:32:40.839186 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3afba738_979a_4454_8a8b_cc59387b2814.slice/crio-a767890a031961c8524026f3f26ea2606369e33a9643d0bfd11f914d0da59c31 WatchSource:0}: Error finding container a767890a031961c8524026f3f26ea2606369e33a9643d0bfd11f914d0da59c31: Status 404 returned error can't find the container with id a767890a031961c8524026f3f26ea2606369e33a9643d0bfd11f914d0da59c31 Dec 03 09:32:40 crc kubenswrapper[4856]: W1203 09:32:40.938184 4856 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2ca111_4677_4c62_85ca_92240c89b835.slice/crio-4b3bcbc1ec6dfe43f869bc649c0490e456097bb593955b30f9f6e2baf6ae7247 WatchSource:0}: Error finding container 4b3bcbc1ec6dfe43f869bc649c0490e456097bb593955b30f9f6e2baf6ae7247: Status 404 returned error can't find the container with id 4b3bcbc1ec6dfe43f869bc649c0490e456097bb593955b30f9f6e2baf6ae7247 Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.940368 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.957885 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-294px"] Dec 03 09:32:40 crc kubenswrapper[4856]: I1203 09:32:40.973925 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:32:40 crc kubenswrapper[4856]: W1203 09:32:40.976614 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8857ca9b_3b50_4d9f_a9ca_71e0dba1aa85.slice/crio-83f54857558b4d419a428c11b8fe91e73461c2c6bfb16f0524b0a9d74ab0592e WatchSource:0}: Error finding container 83f54857558b4d419a428c11b8fe91e73461c2c6bfb16f0524b0a9d74ab0592e: Status 404 returned error can't find the container with id 83f54857558b4d419a428c11b8fe91e73461c2c6bfb16f0524b0a9d74ab0592e Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.022486 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.283045 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3afba738-979a-4454-8a8b-cc59387b2814","Type":"ContainerStarted","Data":"a767890a031961c8524026f3f26ea2606369e33a9643d0bfd11f914d0da59c31"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.286056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-294px" event={"ID":"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c","Type":"ContainerStarted","Data":"6f1aa7610445ce995ac96720e5f238170417bd237eb515195733ffe495aa0e18"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.286126 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-294px" event={"ID":"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c","Type":"ContainerStarted","Data":"2881b17e0185bbac7dfc0077869fdc4fd43b74613c9d7c8e200ca0601add8923"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.288910 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85","Type":"ContainerStarted","Data":"83f54857558b4d419a428c11b8fe91e73461c2c6bfb16f0524b0a9d74ab0592e"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.298671 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea8de6f3-82dc-4897-8036-611d501bfa17","Type":"ContainerStarted","Data":"23f433998a333f8c93f7d87d2065820bde549fefae0901626f573f3625dc3046"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.301798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c2ca111-4677-4c62-85ca-92240c89b835","Type":"ContainerStarted","Data":"4b3bcbc1ec6dfe43f869bc649c0490e456097bb593955b30f9f6e2baf6ae7247"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.329272 4856 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5sgp5" event={"ID":"6bb4a4f4-8246-453d-a39d-db70774f8e5b","Type":"ContainerStarted","Data":"a522ca6f0c630c607e4528e539c3a411906e150e45b2a4f574b26f27e734d9fc"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.329380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5sgp5" event={"ID":"6bb4a4f4-8246-453d-a39d-db70774f8e5b","Type":"ContainerStarted","Data":"ee0b3b195e5a3a8c71cca33d7edc3dd26b309fbc4d30e7b1f55a47ad4c6fe27c"} Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.378379 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5sgp5" podStartSLOduration=2.378357981 podStartE2EDuration="2.378357981s" podCreationTimestamp="2025-12-03 09:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:41.353469825 +0000 UTC m=+1229.536362126" watchObservedRunningTime="2025-12-03 09:32:41.378357981 +0000 UTC m=+1229.561250282" Dec 03 09:32:41 crc kubenswrapper[4856]: I1203 09:32:41.641556 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smxjg"] Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.356947 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smxjg" event={"ID":"7cf2dec0-9031-43e3-8f9f-90b36ce4b786","Type":"ContainerStarted","Data":"26df00eb15714b6f4720d01cf7aa8ff48fd215be3d91fadcfb73bd3db75ea863"} Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.357522 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smxjg" event={"ID":"7cf2dec0-9031-43e3-8f9f-90b36ce4b786","Type":"ContainerStarted","Data":"4681bbab2df4630902ea91a0b891280566d9145b981d61502810dda2f47277cd"} Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.371438 4856 generic.go:334] "Generic (PLEG): container finished" podID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerID="6f1aa7610445ce995ac96720e5f238170417bd237eb515195733ffe495aa0e18" exitCode=0 Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.371548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-294px" event={"ID":"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c","Type":"ContainerDied","Data":"6f1aa7610445ce995ac96720e5f238170417bd237eb515195733ffe495aa0e18"} Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.371651 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-294px" event={"ID":"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c","Type":"ContainerStarted","Data":"32af87210c51d65938f2028acf3b804d714fff362c9d1c6cbba1c5be49ff117d"} Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.415742 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-smxjg" podStartSLOduration=2.415706972 podStartE2EDuration="2.415706972s" podCreationTimestamp="2025-12-03 09:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:42.387207824 +0000 UTC m=+1230.570100115" watchObservedRunningTime="2025-12-03 09:32:42.415706972 +0000 UTC m=+1230.598599273" Dec 03 09:32:42 crc kubenswrapper[4856]: I1203 09:32:42.425791 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-bccf8f775-294px" podStartSLOduration=3.425760055 podStartE2EDuration="3.425760055s" podCreationTimestamp="2025-12-03 09:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:42.411496656 +0000 UTC m=+1230.594388977" watchObservedRunningTime="2025-12-03 09:32:42.425760055 +0000 UTC m=+1230.608652356" Dec 03 09:32:43 crc kubenswrapper[4856]: I1203 09:32:43.383872 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:43 crc kubenswrapper[4856]: I1203 09:32:43.640328 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:32:43 crc kubenswrapper[4856]: I1203 09:32:43.652294 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.407074 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea8de6f3-82dc-4897-8036-611d501bfa17","Type":"ContainerStarted","Data":"1bb0e1c52ada30b20ead5f9de883fb4a2e3f0a96310bbc8f71d481540471be1f"} Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.408820 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea8de6f3-82dc-4897-8036-611d501bfa17","Type":"ContainerStarted","Data":"6259f5efda47708ddb67a4cf560fe3ef32c96f3929ab8f3c879ca7307733e840"} Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.414618 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c2ca111-4677-4c62-85ca-92240c89b835","Type":"ContainerStarted","Data":"b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7"} Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.418432 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3afba738-979a-4454-8a8b-cc59387b2814","Type":"ContainerStarted","Data":"9b91890994058ed2d1cea3a178dad0ce7f8f53d2ff22725ed24455e7d3427931"} Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.418466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3afba738-979a-4454-8a8b-cc59387b2814","Type":"ContainerStarted","Data":"af2df5ec55472255e2b704174db5a220a62d41003698b87d32f1874644d4add1"} Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.418606 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-log" containerID="cri-o://af2df5ec55472255e2b704174db5a220a62d41003698b87d32f1874644d4add1" gracePeriod=30 Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.419176 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-metadata" containerID="cri-o://9b91890994058ed2d1cea3a178dad0ce7f8f53d2ff22725ed24455e7d3427931" gracePeriod=30 Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.430109 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85","Type":"ContainerStarted","Data":"b96acb939b4cc0c318b8d5095eb6d6d124c01b211d726e06f946e5def63cf23f"} Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.430288 4856 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b96acb939b4cc0c318b8d5095eb6d6d124c01b211d726e06f946e5def63cf23f" gracePeriod=30 Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.452616 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.440809624 podStartE2EDuration="6.452584499s" podCreationTimestamp="2025-12-03 09:32:39 +0000 UTC" firstStartedPulling="2025-12-03 09:32:40.681521905 +0000 UTC m=+1228.864414206" lastFinishedPulling="2025-12-03 09:32:44.69329676 +0000 UTC m=+1232.876189081" observedRunningTime="2025-12-03 09:32:45.448543517 +0000 UTC m=+1233.631435828" watchObservedRunningTime="2025-12-03 09:32:45.452584499 +0000 UTC m=+1233.635476800" Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.474068 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.724548558 podStartE2EDuration="6.474050119s" podCreationTimestamp="2025-12-03 09:32:39 +0000 UTC" firstStartedPulling="2025-12-03 09:32:40.941630705 +0000 UTC m=+1229.124523006" lastFinishedPulling="2025-12-03 09:32:44.691132276 +0000 UTC m=+1232.874024567" observedRunningTime="2025-12-03 09:32:45.471479655 +0000 UTC m=+1233.654371976" watchObservedRunningTime="2025-12-03 09:32:45.474050119 +0000 UTC m=+1233.656942420" Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.523842 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.676668952 podStartE2EDuration="6.523778961s" podCreationTimestamp="2025-12-03 09:32:39 +0000 UTC" firstStartedPulling="2025-12-03 09:32:40.84137768 +0000 UTC m=+1229.024269981" lastFinishedPulling="2025-12-03 09:32:44.688487679 +0000 UTC m=+1232.871379990" observedRunningTime="2025-12-03 09:32:45.5026581 +0000 UTC m=+1233.685550401" watchObservedRunningTime="2025-12-03 09:32:45.523778961 +0000 UTC m=+1233.706671272" Dec 03 09:32:45 crc kubenswrapper[4856]: I1203 09:32:45.536600 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.832464096 podStartE2EDuration="6.536580914s" podCreationTimestamp="2025-12-03 09:32:39 +0000 UTC" firstStartedPulling="2025-12-03 09:32:40.984422732 +0000 UTC m=+1229.167315033" lastFinishedPulling="2025-12-03 09:32:44.68853955 +0000 UTC m=+1232.871431851" observedRunningTime="2025-12-03 09:32:45.531636209 +0000 UTC m=+1233.714528520" watchObservedRunningTime="2025-12-03 09:32:45.536580914 +0000 UTC m=+1233.719473215" Dec 03 09:32:46 crc kubenswrapper[4856]: I1203 09:32:46.446268 4856 generic.go:334] "Generic (PLEG): container finished" podID="3afba738-979a-4454-8a8b-cc59387b2814" containerID="af2df5ec55472255e2b704174db5a220a62d41003698b87d32f1874644d4add1" exitCode=143 Dec 03 09:32:46 crc kubenswrapper[4856]: I1203 09:32:46.446434 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3afba738-979a-4454-8a8b-cc59387b2814","Type":"ContainerDied","Data":"af2df5ec55472255e2b704174db5a220a62d41003698b87d32f1874644d4add1"} Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.927835 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.928226 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.956271 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.956342 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.987243 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.991621 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:32:49 crc kubenswrapper[4856]: I1203 09:32:49.992957 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.161035 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.175185 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.267061 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cj2hk"] Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.267528 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerName="dnsmasq-dns" containerID="cri-o://9848719d68973976d64288dc87eb709b07be96f0b7cc3cdc30176f4d28a1ce98" gracePeriod=10 Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.512388 4856 generic.go:334] "Generic (PLEG): container finished" podID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerID="9848719d68973976d64288dc87eb709b07be96f0b7cc3cdc30176f4d28a1ce98" exitCode=0 Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.512637 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" event={"ID":"5db70cd6-4b7f-4586-8f81-37066c3ef690","Type":"ContainerDied","Data":"9848719d68973976d64288dc87eb709b07be96f0b7cc3cdc30176f4d28a1ce98"} Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.519261 4856 generic.go:334] "Generic (PLEG): container finished" podID="7cf2dec0-9031-43e3-8f9f-90b36ce4b786" containerID="26df00eb15714b6f4720d01cf7aa8ff48fd215be3d91fadcfb73bd3db75ea863" exitCode=0 Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.521434 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smxjg" event={"ID":"7cf2dec0-9031-43e3-8f9f-90b36ce4b786","Type":"ContainerDied","Data":"26df00eb15714b6f4720d01cf7aa8ff48fd215be3d91fadcfb73bd3db75ea863"} Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.529100 4856 generic.go:334] "Generic (PLEG): container finished" podID="6bb4a4f4-8246-453d-a39d-db70774f8e5b" containerID="a522ca6f0c630c607e4528e539c3a411906e150e45b2a4f574b26f27e734d9fc" exitCode=0 Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.530574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5sgp5" event={"ID":"6bb4a4f4-8246-453d-a39d-db70774f8e5b","Type":"ContainerDied","Data":"a522ca6f0c630c607e4528e539c3a411906e150e45b2a4f574b26f27e734d9fc"} Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.574594 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 09:32:50 crc kubenswrapper[4856]: I1203 09:32:50.900902 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.031204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-sb\") pod \"5db70cd6-4b7f-4586-8f81-37066c3ef690\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.031628 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-nb\") pod \"5db70cd6-4b7f-4586-8f81-37066c3ef690\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.031746 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cvsl\" (UniqueName: \"kubernetes.io/projected/5db70cd6-4b7f-4586-8f81-37066c3ef690-kube-api-access-5cvsl\") pod \"5db70cd6-4b7f-4586-8f81-37066c3ef690\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.031948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-config\") pod \"5db70cd6-4b7f-4586-8f81-37066c3ef690\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.032129 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-svc\") pod \"5db70cd6-4b7f-4586-8f81-37066c3ef690\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.032213 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-swift-storage-0\") pod \"5db70cd6-4b7f-4586-8f81-37066c3ef690\" (UID: \"5db70cd6-4b7f-4586-8f81-37066c3ef690\") " Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.035717 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.035942 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.042183 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db70cd6-4b7f-4586-8f81-37066c3ef690-kube-api-access-5cvsl" (OuterVolumeSpecName: "kube-api-access-5cvsl") pod "5db70cd6-4b7f-4586-8f81-37066c3ef690" (UID: "5db70cd6-4b7f-4586-8f81-37066c3ef690"). InnerVolumeSpecName "kube-api-access-5cvsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.103953 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5db70cd6-4b7f-4586-8f81-37066c3ef690" (UID: "5db70cd6-4b7f-4586-8f81-37066c3ef690"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.113947 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5db70cd6-4b7f-4586-8f81-37066c3ef690" (UID: "5db70cd6-4b7f-4586-8f81-37066c3ef690"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.113994 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-config" (OuterVolumeSpecName: "config") pod "5db70cd6-4b7f-4586-8f81-37066c3ef690" (UID: "5db70cd6-4b7f-4586-8f81-37066c3ef690"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.116054 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5db70cd6-4b7f-4586-8f81-37066c3ef690" (UID: "5db70cd6-4b7f-4586-8f81-37066c3ef690"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.136146 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cvsl\" (UniqueName: \"kubernetes.io/projected/5db70cd6-4b7f-4586-8f81-37066c3ef690-kube-api-access-5cvsl\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.136196 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.137425 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.137451 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.137465 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.142135 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5db70cd6-4b7f-4586-8f81-37066c3ef690" (UID: "5db70cd6-4b7f-4586-8f81-37066c3ef690"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.240171 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5db70cd6-4b7f-4586-8f81-37066c3ef690-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.540682 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" event={"ID":"5db70cd6-4b7f-4586-8f81-37066c3ef690","Type":"ContainerDied","Data":"e4b357c4c453106eb6bb7bc77885eb517bbb47ea322022cf6b506ce0eb82b26f"} Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.540848 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cj2hk" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.541184 4856 scope.go:117] "RemoveContainer" containerID="9848719d68973976d64288dc87eb709b07be96f0b7cc3cdc30176f4d28a1ce98" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.567992 4856 scope.go:117] "RemoveContainer" containerID="3812b55f5fafcd6902bec9b9bbf41673d09b01072e5d25d9ea9026640bbc2927" Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.782891 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cj2hk"] Dec 03 09:32:51 crc kubenswrapper[4856]: I1203 09:32:51.793297 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cj2hk"] Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.115655 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.124152 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.262184 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-scripts\") pod \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.262475 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-combined-ca-bundle\") pod \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.262570 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-scripts\") pod \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.262688 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xp7h\" (UniqueName: \"kubernetes.io/projected/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-kube-api-access-9xp7h\") pod \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.262902 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtrzw\" (UniqueName: \"kubernetes.io/projected/6bb4a4f4-8246-453d-a39d-db70774f8e5b-kube-api-access-mtrzw\") pod \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.263074 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-config-data\") pod \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\" (UID: \"6bb4a4f4-8246-453d-a39d-db70774f8e5b\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.263206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-config-data\") pod \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.263433 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-combined-ca-bundle\") pod \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\" (UID: \"7cf2dec0-9031-43e3-8f9f-90b36ce4b786\") " Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.269751 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-scripts" (OuterVolumeSpecName: "scripts") pod "6bb4a4f4-8246-453d-a39d-db70774f8e5b" (UID: "6bb4a4f4-8246-453d-a39d-db70774f8e5b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.270090 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-kube-api-access-9xp7h" (OuterVolumeSpecName: "kube-api-access-9xp7h") pod "7cf2dec0-9031-43e3-8f9f-90b36ce4b786" (UID: "7cf2dec0-9031-43e3-8f9f-90b36ce4b786"). InnerVolumeSpecName "kube-api-access-9xp7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.270698 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-scripts" (OuterVolumeSpecName: "scripts") pod "7cf2dec0-9031-43e3-8f9f-90b36ce4b786" (UID: "7cf2dec0-9031-43e3-8f9f-90b36ce4b786"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.274069 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb4a4f4-8246-453d-a39d-db70774f8e5b-kube-api-access-mtrzw" (OuterVolumeSpecName: "kube-api-access-mtrzw") pod "6bb4a4f4-8246-453d-a39d-db70774f8e5b" (UID: "6bb4a4f4-8246-453d-a39d-db70774f8e5b"). InnerVolumeSpecName "kube-api-access-mtrzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.301822 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cf2dec0-9031-43e3-8f9f-90b36ce4b786" (UID: "7cf2dec0-9031-43e3-8f9f-90b36ce4b786"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.304702 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bb4a4f4-8246-453d-a39d-db70774f8e5b" (UID: "6bb4a4f4-8246-453d-a39d-db70774f8e5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.325655 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-config-data" (OuterVolumeSpecName: "config-data") pod "6bb4a4f4-8246-453d-a39d-db70774f8e5b" (UID: "6bb4a4f4-8246-453d-a39d-db70774f8e5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.333179 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-config-data" (OuterVolumeSpecName: "config-data") pod "7cf2dec0-9031-43e3-8f9f-90b36ce4b786" (UID: "7cf2dec0-9031-43e3-8f9f-90b36ce4b786"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366650 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366695 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366706 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366715 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366724 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xp7h\" (UniqueName: \"kubernetes.io/projected/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-kube-api-access-9xp7h\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366736 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtrzw\" (UniqueName: \"kubernetes.io/projected/6bb4a4f4-8246-453d-a39d-db70774f8e5b-kube-api-access-mtrzw\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366744 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bb4a4f4-8246-453d-a39d-db70774f8e5b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.366753 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf2dec0-9031-43e3-8f9f-90b36ce4b786-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.559343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smxjg" event={"ID":"7cf2dec0-9031-43e3-8f9f-90b36ce4b786","Type":"ContainerDied","Data":"4681bbab2df4630902ea91a0b891280566d9145b981d61502810dda2f47277cd"} Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.559427 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4681bbab2df4630902ea91a0b891280566d9145b981d61502810dda2f47277cd" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.559816 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smxjg" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.562910 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5sgp5" event={"ID":"6bb4a4f4-8246-453d-a39d-db70774f8e5b","Type":"ContainerDied","Data":"ee0b3b195e5a3a8c71cca33d7edc3dd26b309fbc4d30e7b1f55a47ad4c6fe27c"} Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.562975 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee0b3b195e5a3a8c71cca33d7edc3dd26b309fbc4d30e7b1f55a47ad4c6fe27c" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.563091 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5sgp5" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.705648 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" path="/var/lib/kubelet/pods/5db70cd6-4b7f-4586-8f81-37066c3ef690/volumes" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.758723 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.758847 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.854123 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 09:32:52 crc kubenswrapper[4856]: E1203 09:32:52.854901 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf2dec0-9031-43e3-8f9f-90b36ce4b786" containerName="nova-cell1-conductor-db-sync" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.854923 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf2dec0-9031-43e3-8f9f-90b36ce4b786" containerName="nova-cell1-conductor-db-sync" Dec 03 09:32:52 crc kubenswrapper[4856]: E1203 09:32:52.854987 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb4a4f4-8246-453d-a39d-db70774f8e5b" containerName="nova-manage" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.854995 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb4a4f4-8246-453d-a39d-db70774f8e5b" containerName="nova-manage" Dec 03 09:32:52 crc kubenswrapper[4856]: E1203 09:32:52.855011 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerName="init" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.855019 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerName="init" Dec 03 09:32:52 crc kubenswrapper[4856]: E1203 09:32:52.855036 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerName="dnsmasq-dns" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.855046 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerName="dnsmasq-dns" Dec 03 09:32:52 crc 
kubenswrapper[4856]: I1203 09:32:52.855326 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf2dec0-9031-43e3-8f9f-90b36ce4b786" containerName="nova-cell1-conductor-db-sync" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.855373 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db70cd6-4b7f-4586-8f81-37066c3ef690" containerName="dnsmasq-dns" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.855399 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb4a4f4-8246-453d-a39d-db70774f8e5b" containerName="nova-manage" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.856543 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.865133 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.871790 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 09:32:52 crc kubenswrapper[4856]: I1203 09:32:52.999630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9g6k\" (UniqueName: \"kubernetes.io/projected/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-kube-api-access-j9g6k\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.000039 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.000310 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.036002 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.036432 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-log" containerID="cri-o://6259f5efda47708ddb67a4cf560fe3ef32c96f3929ab8f3c879ca7307733e840" gracePeriod=30 Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.036655 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-api" containerID="cri-o://1bb0e1c52ada30b20ead5f9de883fb4a2e3f0a96310bbc8f71d481540471be1f" gracePeriod=30 Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.054720 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.059404 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6c2ca111-4677-4c62-85ca-92240c89b835" containerName="nova-scheduler-scheduler" 
containerID="cri-o://b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" gracePeriod=30 Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.105649 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9g6k\" (UniqueName: \"kubernetes.io/projected/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-kube-api-access-j9g6k\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.105714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.108401 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.114374 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.123139 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.125678 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9g6k\" (UniqueName: \"kubernetes.io/projected/8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b-kube-api-access-j9g6k\") pod \"nova-cell1-conductor-0\" (UID: \"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.178421 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.576412 4856 generic.go:334] "Generic (PLEG): container finished" podID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerID="6259f5efda47708ddb67a4cf560fe3ef32c96f3929ab8f3c879ca7307733e840" exitCode=143 Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.577022 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea8de6f3-82dc-4897-8036-611d501bfa17","Type":"ContainerDied","Data":"6259f5efda47708ddb67a4cf560fe3ef32c96f3929ab8f3c879ca7307733e840"} Dec 03 09:32:53 crc kubenswrapper[4856]: I1203 09:32:53.802639 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 09:32:53 crc kubenswrapper[4856]: W1203 09:32:53.805848 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b03be8b_cf3f_4194_9ebb_1bd5d2a91e6b.slice/crio-fc85b3bf747bf421935a83d994b1e900033f3f451afb58cc24b308e8d0b8436c WatchSource:0}: Error finding container fc85b3bf747bf421935a83d994b1e900033f3f451afb58cc24b308e8d0b8436c: Status 404 returned error can't find the container with id fc85b3bf747bf421935a83d994b1e900033f3f451afb58cc24b308e8d0b8436c Dec 03 09:32:54 crc kubenswrapper[4856]: I1203 09:32:54.624567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b","Type":"ContainerStarted","Data":"1965ac2887acbc6944a97fcf60e256ff67bff19e18e8fec3f71fc8fce9577488"} Dec 03 09:32:54 crc kubenswrapper[4856]: I1203 09:32:54.624629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b","Type":"ContainerStarted","Data":"fc85b3bf747bf421935a83d994b1e900033f3f451afb58cc24b308e8d0b8436c"} Dec 03 09:32:54 crc kubenswrapper[4856]: I1203 09:32:54.625320 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 03 09:32:54 crc kubenswrapper[4856]: I1203 09:32:54.656557 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.65653524 podStartE2EDuration="2.65653524s" podCreationTimestamp="2025-12-03 09:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:54.654395166 +0000 UTC m=+1242.837287467" watchObservedRunningTime="2025-12-03 09:32:54.65653524 +0000 UTC m=+1242.839427541" Dec 03 09:32:54 crc kubenswrapper[4856]: E1203 09:32:54.959058 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 09:32:54 crc kubenswrapper[4856]: E1203 09:32:54.961266 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 09:32:54 crc kubenswrapper[4856]: E1203 09:32:54.963525 4856 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 09:32:54 crc kubenswrapper[4856]: E1203 09:32:54.963609 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6c2ca111-4677-4c62-85ca-92240c89b835" containerName="nova-scheduler-scheduler" Dec 03 09:32:56 crc kubenswrapper[4856]: I1203 09:32:56.477304 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 09:32:56 crc kubenswrapper[4856]: I1203 09:32:56.652567 4856 generic.go:334] "Generic (PLEG): container finished" podID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerID="1bb0e1c52ada30b20ead5f9de883fb4a2e3f0a96310bbc8f71d481540471be1f" exitCode=0 Dec 03 09:32:56 crc kubenswrapper[4856]: I1203 09:32:56.652672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea8de6f3-82dc-4897-8036-611d501bfa17","Type":"ContainerDied","Data":"1bb0e1c52ada30b20ead5f9de883fb4a2e3f0a96310bbc8f71d481540471be1f"} Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.101392 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.175641 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-combined-ca-bundle\") pod \"ea8de6f3-82dc-4897-8036-611d501bfa17\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.175869 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-config-data\") pod \"ea8de6f3-82dc-4897-8036-611d501bfa17\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.176084 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd4lb\" (UniqueName: \"kubernetes.io/projected/ea8de6f3-82dc-4897-8036-611d501bfa17-kube-api-access-hd4lb\") pod \"ea8de6f3-82dc-4897-8036-611d501bfa17\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.176127 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8de6f3-82dc-4897-8036-611d501bfa17-logs\") pod \"ea8de6f3-82dc-4897-8036-611d501bfa17\" (UID: \"ea8de6f3-82dc-4897-8036-611d501bfa17\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.177170 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8de6f3-82dc-4897-8036-611d501bfa17-logs" (OuterVolumeSpecName: "logs") pod "ea8de6f3-82dc-4897-8036-611d501bfa17" (UID: "ea8de6f3-82dc-4897-8036-611d501bfa17"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.185038 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8de6f3-82dc-4897-8036-611d501bfa17-kube-api-access-hd4lb" (OuterVolumeSpecName: "kube-api-access-hd4lb") pod "ea8de6f3-82dc-4897-8036-611d501bfa17" (UID: "ea8de6f3-82dc-4897-8036-611d501bfa17"). InnerVolumeSpecName "kube-api-access-hd4lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.226845 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-config-data" (OuterVolumeSpecName: "config-data") pod "ea8de6f3-82dc-4897-8036-611d501bfa17" (UID: "ea8de6f3-82dc-4897-8036-611d501bfa17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.228624 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea8de6f3-82dc-4897-8036-611d501bfa17" (UID: "ea8de6f3-82dc-4897-8036-611d501bfa17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.281175 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.281225 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd4lb\" (UniqueName: \"kubernetes.io/projected/ea8de6f3-82dc-4897-8036-611d501bfa17-kube-api-access-hd4lb\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.281239 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea8de6f3-82dc-4897-8036-611d501bfa17-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.281250 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8de6f3-82dc-4897-8036-611d501bfa17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.323470 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.382925 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-combined-ca-bundle\") pod \"6c2ca111-4677-4c62-85ca-92240c89b835\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.383271 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-config-data\") pod \"6c2ca111-4677-4c62-85ca-92240c89b835\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.383380 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25qfh\" (UniqueName: \"kubernetes.io/projected/6c2ca111-4677-4c62-85ca-92240c89b835-kube-api-access-25qfh\") pod \"6c2ca111-4677-4c62-85ca-92240c89b835\" (UID: \"6c2ca111-4677-4c62-85ca-92240c89b835\") " Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.389085 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2ca111-4677-4c62-85ca-92240c89b835-kube-api-access-25qfh" (OuterVolumeSpecName: "kube-api-access-25qfh") pod "6c2ca111-4677-4c62-85ca-92240c89b835" (UID: "6c2ca111-4677-4c62-85ca-92240c89b835"). InnerVolumeSpecName "kube-api-access-25qfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.412356 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-config-data" (OuterVolumeSpecName: "config-data") pod "6c2ca111-4677-4c62-85ca-92240c89b835" (UID: "6c2ca111-4677-4c62-85ca-92240c89b835"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.414960 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2ca111-4677-4c62-85ca-92240c89b835" (UID: "6c2ca111-4677-4c62-85ca-92240c89b835"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.486303 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.486345 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2ca111-4677-4c62-85ca-92240c89b835-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.486357 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25qfh\" (UniqueName: \"kubernetes.io/projected/6c2ca111-4677-4c62-85ca-92240c89b835-kube-api-access-25qfh\") on node \"crc\" DevicePath \"\"" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.665459 4856 generic.go:334] "Generic (PLEG): container finished" podID="6c2ca111-4677-4c62-85ca-92240c89b835" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" exitCode=0 Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.665515 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.665538 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c2ca111-4677-4c62-85ca-92240c89b835","Type":"ContainerDied","Data":"b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7"} Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.666123 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c2ca111-4677-4c62-85ca-92240c89b835","Type":"ContainerDied","Data":"4b3bcbc1ec6dfe43f869bc649c0490e456097bb593955b30f9f6e2baf6ae7247"} Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.666147 4856 scope.go:117] "RemoveContainer" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.670759 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ea8de6f3-82dc-4897-8036-611d501bfa17","Type":"ContainerDied","Data":"23f433998a333f8c93f7d87d2065820bde549fefae0901626f573f3625dc3046"} Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.670822 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.700500 4856 scope.go:117] "RemoveContainer" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" Dec 03 09:32:57 crc kubenswrapper[4856]: E1203 09:32:57.701193 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7\": container with ID starting with b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7 not found: ID does not exist" containerID="b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.701257 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7"} err="failed to get container status \"b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7\": rpc error: code = NotFound desc = could not find container \"b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7\": container with ID starting with b01a558ae64fea651d7b6c92ca885fd3dea255c372d01878517f5208c91608e7 not found: ID does not exist" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.701299 4856 scope.go:117] "RemoveContainer" containerID="1bb0e1c52ada30b20ead5f9de883fb4a2e3f0a96310bbc8f71d481540471be1f" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.710261 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.718630 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.726305 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.755841 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.780418 4856 scope.go:117] "RemoveContainer" containerID="6259f5efda47708ddb67a4cf560fe3ef32c96f3929ab8f3c879ca7307733e840" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.809879 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: E1203 09:32:57.810682 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2ca111-4677-4c62-85ca-92240c89b835" containerName="nova-scheduler-scheduler" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.810713 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2ca111-4677-4c62-85ca-92240c89b835" containerName="nova-scheduler-scheduler" Dec 03 09:32:57 crc kubenswrapper[4856]: E1203 09:32:57.810735 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-api" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.810744 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-api" Dec 03 09:32:57 crc kubenswrapper[4856]: E1203 09:32:57.810780 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-log" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.810788 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-log" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.811093 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-api" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.811124 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2ca111-4677-4c62-85ca-92240c89b835" containerName="nova-scheduler-scheduler" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.811152 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" containerName="nova-api-log" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.812528 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.815478 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.821077 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.824511 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.827069 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.857273 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.869548 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.899759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-config-data\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.899852 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/456fffdc-0a3d-4d0e-8b12-6b41f561890b-logs\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.899957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46bm\" (UniqueName: \"kubernetes.io/projected/456fffdc-0a3d-4d0e-8b12-6b41f561890b-kube-api-access-s46bm\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.900025 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.900068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-config-data\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.900125 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:57 crc kubenswrapper[4856]: I1203 09:32:57.900594 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62z6w\" (UniqueName: \"kubernetes.io/projected/b0b22308-083d-4f14-a30b-a5d02357cdaf-kube-api-access-62z6w\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.003969 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46bm\" (UniqueName: \"kubernetes.io/projected/456fffdc-0a3d-4d0e-8b12-6b41f561890b-kube-api-access-s46bm\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.004067 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.004114 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-config-data\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.004178 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.004261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62z6w\" (UniqueName: \"kubernetes.io/projected/b0b22308-083d-4f14-a30b-a5d02357cdaf-kube-api-access-62z6w\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.004328 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-config-data\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.004364 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/456fffdc-0a3d-4d0e-8b12-6b41f561890b-logs\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.005467 4856 
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.009097 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-config-data\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.010115 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-config-data\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.011069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.021649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.024555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62z6w\" (UniqueName: \"kubernetes.io/projected/b0b22308-083d-4f14-a30b-a5d02357cdaf-kube-api-access-62z6w\") pod \"nova-scheduler-0\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " pod="openstack/nova-scheduler-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.025276 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46bm\" (UniqueName: \"kubernetes.io/projected/456fffdc-0a3d-4d0e-8b12-6b41f561890b-kube-api-access-s46bm\") pod \"nova-api-0\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " pod="openstack/nova-api-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.142276 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.155433 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.669830 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:32:58 crc kubenswrapper[4856]: W1203 09:32:58.690134 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456fffdc_0a3d_4d0e_8b12_6b41f561890b.slice/crio-8a8fa54b62643d599415ee6f53ac134813f3d7194b11004ea9761bc6e58056db WatchSource:0}: Error finding container 8a8fa54b62643d599415ee6f53ac134813f3d7194b11004ea9761bc6e58056db: Status 404 returned error can't find the container with id 8a8fa54b62643d599415ee6f53ac134813f3d7194b11004ea9761bc6e58056db Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.709693 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2ca111-4677-4c62-85ca-92240c89b835" path="/var/lib/kubelet/pods/6c2ca111-4677-4c62-85ca-92240c89b835/volumes" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.710711 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8de6f3-82dc-4897-8036-611d501bfa17" path="/var/lib/kubelet/pods/ea8de6f3-82dc-4897-8036-611d501bfa17/volumes" Dec 03 09:32:58 crc kubenswrapper[4856]: I1203 09:32:58.712415 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:32:58 crc kubenswrapper[4856]: W1203 09:32:58.718540 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0b22308_083d_4f14_a30b_a5d02357cdaf.slice/crio-dbe8b147c4c206ac1331d2e28801e598c9cd4b73b31086684794efeaada8a572 WatchSource:0}: Error finding container dbe8b147c4c206ac1331d2e28801e598c9cd4b73b31086684794efeaada8a572: Status 404 returned error can't find the container with id dbe8b147c4c206ac1331d2e28801e598c9cd4b73b31086684794efeaada8a572 Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.707186 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"456fffdc-0a3d-4d0e-8b12-6b41f561890b","Type":"ContainerStarted","Data":"cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297"} Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.707783 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"456fffdc-0a3d-4d0e-8b12-6b41f561890b","Type":"ContainerStarted","Data":"d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d"} Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.707823 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"456fffdc-0a3d-4d0e-8b12-6b41f561890b","Type":"ContainerStarted","Data":"8a8fa54b62643d599415ee6f53ac134813f3d7194b11004ea9761bc6e58056db"} Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.709368 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0b22308-083d-4f14-a30b-a5d02357cdaf","Type":"ContainerStarted","Data":"5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b"} Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.709451 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0b22308-083d-4f14-a30b-a5d02357cdaf","Type":"ContainerStarted","Data":"dbe8b147c4c206ac1331d2e28801e598c9cd4b73b31086684794efeaada8a572"} Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.742105 4856 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-api-0" podStartSLOduration=2.742079832 podStartE2EDuration="2.742079832s" podCreationTimestamp="2025-12-03 09:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:59.73524758 +0000 UTC m=+1247.918139891" watchObservedRunningTime="2025-12-03 09:32:59.742079832 +0000 UTC m=+1247.924972133" Dec 03 09:32:59 crc kubenswrapper[4856]: I1203 09:32:59.770626 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.77060201 podStartE2EDuration="2.77060201s" podCreationTimestamp="2025-12-03 09:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:32:59.764595129 +0000 UTC m=+1247.947487460" watchObservedRunningTime="2025-12-03 09:32:59.77060201 +0000 UTC m=+1247.953494311" Dec 03 09:33:00 crc kubenswrapper[4856]: I1203 09:33:00.641455 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:33:00 crc kubenswrapper[4856]: I1203 09:33:00.642285 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" containerName="kube-state-metrics" containerID="cri-o://f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45" gracePeriod=30 Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.208318 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.397747 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k48mp\" (UniqueName: \"kubernetes.io/projected/6707abf6-3ddf-4cf5-91d7-10a6a229d274-kube-api-access-k48mp\") pod \"6707abf6-3ddf-4cf5-91d7-10a6a229d274\" (UID: \"6707abf6-3ddf-4cf5-91d7-10a6a229d274\") " Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.406057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6707abf6-3ddf-4cf5-91d7-10a6a229d274-kube-api-access-k48mp" (OuterVolumeSpecName: "kube-api-access-k48mp") pod "6707abf6-3ddf-4cf5-91d7-10a6a229d274" (UID: "6707abf6-3ddf-4cf5-91d7-10a6a229d274"). InnerVolumeSpecName "kube-api-access-k48mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.501264 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k48mp\" (UniqueName: \"kubernetes.io/projected/6707abf6-3ddf-4cf5-91d7-10a6a229d274-kube-api-access-k48mp\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.730329 4856 generic.go:334] "Generic (PLEG): container finished" podID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" containerID="f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45" exitCode=2 Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.730603 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6707abf6-3ddf-4cf5-91d7-10a6a229d274","Type":"ContainerDied","Data":"f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45"} Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.730630 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.730655 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6707abf6-3ddf-4cf5-91d7-10a6a229d274","Type":"ContainerDied","Data":"7750c1c5572e8f1b158cabca002605595a27230df28293432603cb0a6a8986cb"} Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.730718 4856 scope.go:117] "RemoveContainer" containerID="f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.761708 4856 scope.go:117] "RemoveContainer" containerID="f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45" Dec 03 09:33:01 crc kubenswrapper[4856]: E1203 09:33:01.762521 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45\": container with ID starting with f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45 not found: ID does not exist" containerID="f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.762565 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45"} err="failed to get container status \"f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45\": rpc error: code = NotFound desc = could not find container \"f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45\": container with ID starting with f497d279f8bdb3ce83c840aa1e02c45ee0998aefcf7e9da23a791e23c44b8b45 not found: ID does not exist" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.767662 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.779795 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.791009 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:33:01 crc kubenswrapper[4856]: E1203 09:33:01.791676 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" containerName="kube-state-metrics" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.791704 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" containerName="kube-state-metrics" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.791995 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" containerName="kube-state-metrics" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.793013 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.795467 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.795715 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.801271 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.908990 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.909061 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.909121 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7x7g\" (UniqueName: \"kubernetes.io/projected/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-api-access-h7x7g\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:01 crc kubenswrapper[4856]: I1203 09:33:01.909198 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.011353 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7x7g\" (UniqueName: \"kubernetes.io/projected/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-api-access-h7x7g\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.011435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.011550 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.011591 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.016979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.017115 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.017545 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255f4336-240a-4793-88a0-a2f6da40c0b8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.047068 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7x7g\" (UniqueName: \"kubernetes.io/projected/255f4336-240a-4793-88a0-a2f6da40c0b8-kube-api-access-h7x7g\") pod \"kube-state-metrics-0\" (UID: \"255f4336-240a-4793-88a0-a2f6da40c0b8\") " pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.131006 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.603511 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.701913 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6707abf6-3ddf-4cf5-91d7-10a6a229d274" path="/var/lib/kubelet/pods/6707abf6-3ddf-4cf5-91d7-10a6a229d274/volumes" Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.702967 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.703308 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-central-agent" containerID="cri-o://2faeaabf7a368bd1df8f36bf868af9abb7540f888d9f4b6ed2a8923b9a185cbd" gracePeriod=30 Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.703477 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-notification-agent" containerID="cri-o://553957dcb991374ce9351eb70abb0e00a658a243979f55da766cc9746cc45117" gracePeriod=30 Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.703580 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="proxy-httpd" containerID="cri-o://734d87fbe647bc9c3bcd98f3c5945dc09efba759c2170c8ec585c5a6072c276f" gracePeriod=30 Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.703671 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="sg-core" containerID="cri-o://82a9ab95cbbbf7dec912f2d8c66e0c8eb619c7552888114cf2dfb4d0b62dce69" gracePeriod=30 Dec 03 09:33:02 crc kubenswrapper[4856]: I1203 09:33:02.745950 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"255f4336-240a-4793-88a0-a2f6da40c0b8","Type":"ContainerStarted","Data":"7407b1a6ef3ef0f0dee1dc703dfa28f6f7feb9f6d77b295427750b5b1a8c3380"} Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.142537 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.221114 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.759634 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"255f4336-240a-4793-88a0-a2f6da40c0b8","Type":"ContainerStarted","Data":"3872b44d912cee19ff5395fbc386021fb1dc0503f8eb175d9622463c9175334c"} Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.766027 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerID="734d87fbe647bc9c3bcd98f3c5945dc09efba759c2170c8ec585c5a6072c276f" exitCode=0 Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.766076 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerID="82a9ab95cbbbf7dec912f2d8c66e0c8eb619c7552888114cf2dfb4d0b62dce69" exitCode=2 Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.766088 4856 
Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.766117 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerDied","Data":"734d87fbe647bc9c3bcd98f3c5945dc09efba759c2170c8ec585c5a6072c276f"}
Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.766191 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerDied","Data":"82a9ab95cbbbf7dec912f2d8c66e0c8eb619c7552888114cf2dfb4d0b62dce69"}
Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.766208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerDied","Data":"2faeaabf7a368bd1df8f36bf868af9abb7540f888d9f4b6ed2a8923b9a185cbd"}
Dec 03 09:33:03 crc kubenswrapper[4856]: I1203 09:33:03.783347 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.410390499 podStartE2EDuration="2.783320599s" podCreationTimestamp="2025-12-03 09:33:01 +0000 UTC" firstStartedPulling="2025-12-03 09:33:02.629202638 +0000 UTC m=+1250.812094939" lastFinishedPulling="2025-12-03 09:33:03.002132738 +0000 UTC m=+1251.185025039" observedRunningTime="2025-12-03 09:33:03.780743644 +0000 UTC m=+1251.963635955" watchObservedRunningTime="2025-12-03 09:33:03.783320599 +0000 UTC m=+1251.966212900"
Dec 03 09:33:04 crc kubenswrapper[4856]: I1203 09:33:04.777946 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 03 09:33:06 crc kubenswrapper[4856]: I1203 09:33:06.798257 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerID="553957dcb991374ce9351eb70abb0e00a658a243979f55da766cc9746cc45117" exitCode=0
Dec 03 09:33:06 crc kubenswrapper[4856]: I1203 09:33:06.798358 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerDied","Data":"553957dcb991374ce9351eb70abb0e00a658a243979f55da766cc9746cc45117"}
Dec 03 09:33:06 crc kubenswrapper[4856]: I1203 09:33:06.914251 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.043725 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nk5j\" (UniqueName: \"kubernetes.io/projected/d3063012-52f3-452f-b4b7-24da113b1ba4-kube-api-access-9nk5j\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.043881 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-config-data\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.044018 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-scripts\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.044113 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-log-httpd\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.044161 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-sg-core-conf-yaml\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.044191 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-combined-ca-bundle\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.044325 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-run-httpd\") pod \"d3063012-52f3-452f-b4b7-24da113b1ba4\" (UID: \"d3063012-52f3-452f-b4b7-24da113b1ba4\") " Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.044867 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.045466 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.045500 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.051831 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3063012-52f3-452f-b4b7-24da113b1ba4-kube-api-access-9nk5j" (OuterVolumeSpecName: "kube-api-access-9nk5j") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). InnerVolumeSpecName "kube-api-access-9nk5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.053161 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-scripts" (OuterVolumeSpecName: "scripts") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.083154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.130838 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.147122 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nk5j\" (UniqueName: \"kubernetes.io/projected/d3063012-52f3-452f-b4b7-24da113b1ba4-kube-api-access-9nk5j\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.147152 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.147162 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3063012-52f3-452f-b4b7-24da113b1ba4-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.147172 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.147181 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.156903 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-config-data" (OuterVolumeSpecName: "config-data") pod "d3063012-52f3-452f-b4b7-24da113b1ba4" (UID: "d3063012-52f3-452f-b4b7-24da113b1ba4"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.248324 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3063012-52f3-452f-b4b7-24da113b1ba4-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.812445 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3063012-52f3-452f-b4b7-24da113b1ba4","Type":"ContainerDied","Data":"e8698b494f79599a3eb2bd0e1be24246632bdb00018d50a02304bb2cb8ea8c59"} Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.812756 4856 scope.go:117] "RemoveContainer" containerID="734d87fbe647bc9c3bcd98f3c5945dc09efba759c2170c8ec585c5a6072c276f" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.812499 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.863275 4856 scope.go:117] "RemoveContainer" containerID="82a9ab95cbbbf7dec912f2d8c66e0c8eb619c7552888114cf2dfb4d0b62dce69" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.863467 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.884353 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.891777 4856 scope.go:117] "RemoveContainer" containerID="553957dcb991374ce9351eb70abb0e00a658a243979f55da766cc9746cc45117" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.910291 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:07 crc kubenswrapper[4856]: E1203 09:33:07.911700 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="sg-core" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.911734 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="sg-core" Dec 03 09:33:07 crc kubenswrapper[4856]: E1203 09:33:07.911769 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-central-agent" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.911779 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-central-agent" Dec 03 09:33:07 crc kubenswrapper[4856]: E1203 09:33:07.911861 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="proxy-httpd" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.911874 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="proxy-httpd" Dec 03 09:33:07 crc kubenswrapper[4856]: E1203 09:33:07.911916 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-notification-agent" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.911926 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-notification-agent" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.912539 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-central-agent" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.912603 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="ceilometer-notification-agent" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.912643 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="proxy-httpd" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.912665 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" containerName="sg-core" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.917401 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.925212 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.927181 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.927193 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.929013 4856 scope.go:117] "RemoveContainer" containerID="2faeaabf7a368bd1df8f36bf868af9abb7540f888d9f4b6ed2a8923b9a185cbd" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.947134 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.965779 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4lr\" (UniqueName: \"kubernetes.io/projected/810ebf87-5f39-4a72-801e-d2d8a888f180-kube-api-access-zn4lr\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.965842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-scripts\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.965866 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-config-data\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.966063 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.966189 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-run-httpd\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " 
pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.966213 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-log-httpd\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.966535 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:07 crc kubenswrapper[4856]: I1203 09:33:07.966676 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068153 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068231 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068278 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4lr\" (UniqueName: \"kubernetes.io/projected/810ebf87-5f39-4a72-801e-d2d8a888f180-kube-api-access-zn4lr\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068305 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-scripts\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068334 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-config-data\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068366 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068397 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-run-httpd\") 
pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-log-httpd\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.068939 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-log-httpd\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.070561 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-run-httpd\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.075379 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.078252 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-config-data\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.078755 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.078827 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.078989 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-scripts\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.087476 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4lr\" (UniqueName: \"kubernetes.io/projected/810ebf87-5f39-4a72-801e-d2d8a888f180-kube-api-access-zn4lr\") pod \"ceilometer-0\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.143411 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.155618 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.155668 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.183603 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.263019 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.707293 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3063012-52f3-452f-b4b7-24da113b1ba4" path="/var/lib/kubelet/pods/d3063012-52f3-452f-b4b7-24da113b1ba4/volumes" Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.755951 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:08 crc kubenswrapper[4856]: W1203 09:33:08.760937 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810ebf87_5f39_4a72_801e_d2d8a888f180.slice/crio-2cbb77c2421103c17975919a2a13b936cd78f69d6ad7f03a70fdc0a927add9f6 WatchSource:0}: Error finding container 2cbb77c2421103c17975919a2a13b936cd78f69d6ad7f03a70fdc0a927add9f6: Status 404 returned error can't find the container with id 2cbb77c2421103c17975919a2a13b936cd78f69d6ad7f03a70fdc0a927add9f6 Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.829878 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerStarted","Data":"2cbb77c2421103c17975919a2a13b936cd78f69d6ad7f03a70fdc0a927add9f6"} Dec 03 09:33:08 crc kubenswrapper[4856]: I1203 09:33:08.856721 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 09:33:09 crc kubenswrapper[4856]: I1203 09:33:09.199172 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:09 crc kubenswrapper[4856]: I1203 09:33:09.199197 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:09 crc kubenswrapper[4856]: I1203 09:33:09.846334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerStarted","Data":"e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6"} Dec 03 09:33:10 crc kubenswrapper[4856]: I1203 09:33:10.858164 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerStarted","Data":"dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863"} Dec 03 09:33:11 crc kubenswrapper[4856]: I1203 09:33:11.871793 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerStarted","Data":"219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb"} Dec 03 09:33:12 crc kubenswrapper[4856]: I1203 09:33:12.148022 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 09:33:12 crc kubenswrapper[4856]: I1203 09:33:12.890143 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerStarted","Data":"0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec"} Dec 03 09:33:12 crc kubenswrapper[4856]: I1203 09:33:12.891323 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 09:33:12 crc kubenswrapper[4856]: I1203 09:33:12.944664 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.480830209 podStartE2EDuration="5.944628956s" podCreationTimestamp="2025-12-03 09:33:07 +0000 UTC" firstStartedPulling="2025-12-03 09:33:08.763654341 +0000 UTC m=+1256.946546642" lastFinishedPulling="2025-12-03 09:33:12.227453078 +0000 UTC m=+1260.410345389" observedRunningTime="2025-12-03 09:33:12.927605557 +0000 UTC m=+1261.110497888" watchObservedRunningTime="2025-12-03 09:33:12.944628956 +0000 UTC m=+1261.127521257" Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.923017 4856 generic.go:334] "Generic (PLEG): container finished" podID="8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" containerID="b96acb939b4cc0c318b8d5095eb6d6d124c01b211d726e06f946e5def63cf23f" exitCode=137 Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.923111 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85","Type":"ContainerDied","Data":"b96acb939b4cc0c318b8d5095eb6d6d124c01b211d726e06f946e5def63cf23f"} Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.923712 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85","Type":"ContainerDied","Data":"83f54857558b4d419a428c11b8fe91e73461c2c6bfb16f0524b0a9d74ab0592e"} Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.923740 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f54857558b4d419a428c11b8fe91e73461c2c6bfb16f0524b0a9d74ab0592e" Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.931151 4856 generic.go:334] "Generic (PLEG): container finished" podID="3afba738-979a-4454-8a8b-cc59387b2814" containerID="9b91890994058ed2d1cea3a178dad0ce7f8f53d2ff22725ed24455e7d3427931" exitCode=137 Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.931204 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3afba738-979a-4454-8a8b-cc59387b2814","Type":"ContainerDied","Data":"9b91890994058ed2d1cea3a178dad0ce7f8f53d2ff22725ed24455e7d3427931"} Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.931237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3afba738-979a-4454-8a8b-cc59387b2814","Type":"ContainerDied","Data":"a767890a031961c8524026f3f26ea2606369e33a9643d0bfd11f914d0da59c31"} Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.931252 4856 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a767890a031961c8524026f3f26ea2606369e33a9643d0bfd11f914d0da59c31" Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.935660 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.945492 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.953750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-combined-ca-bundle\") pod \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.953873 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-config-data\") pod \"3afba738-979a-4454-8a8b-cc59387b2814\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.953927 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-combined-ca-bundle\") pod \"3afba738-979a-4454-8a8b-cc59387b2814\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.953993 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-config-data\") pod \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.954018 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6qld\" (UniqueName: \"kubernetes.io/projected/3afba738-979a-4454-8a8b-cc59387b2814-kube-api-access-q6qld\") pod \"3afba738-979a-4454-8a8b-cc59387b2814\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.954058 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7tjk\" (UniqueName: \"kubernetes.io/projected/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-kube-api-access-f7tjk\") pod \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\" (UID: \"8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.954080 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afba738-979a-4454-8a8b-cc59387b2814-logs\") pod \"3afba738-979a-4454-8a8b-cc59387b2814\" (UID: \"3afba738-979a-4454-8a8b-cc59387b2814\") " Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.954698 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afba738-979a-4454-8a8b-cc59387b2814-logs" (OuterVolumeSpecName: "logs") pod "3afba738-979a-4454-8a8b-cc59387b2814" (UID: "3afba738-979a-4454-8a8b-cc59387b2814"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:15 crc kubenswrapper[4856]: I1203 09:33:15.999331 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afba738-979a-4454-8a8b-cc59387b2814-kube-api-access-q6qld" (OuterVolumeSpecName: "kube-api-access-q6qld") pod "3afba738-979a-4454-8a8b-cc59387b2814" (UID: "3afba738-979a-4454-8a8b-cc59387b2814"). InnerVolumeSpecName "kube-api-access-q6qld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:15.999724 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-kube-api-access-f7tjk" (OuterVolumeSpecName: "kube-api-access-f7tjk") pod "8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" (UID: "8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85"). InnerVolumeSpecName "kube-api-access-f7tjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.006340 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" (UID: "8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.016584 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-config-data" (OuterVolumeSpecName: "config-data") pod "8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" (UID: "8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.025089 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3afba738-979a-4454-8a8b-cc59387b2814" (UID: "3afba738-979a-4454-8a8b-cc59387b2814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.027365 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-config-data" (OuterVolumeSpecName: "config-data") pod "3afba738-979a-4454-8a8b-cc59387b2814" (UID: "3afba738-979a-4454-8a8b-cc59387b2814"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.056201 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.056496 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afba738-979a-4454-8a8b-cc59387b2814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.056695 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.056796 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6qld\" (UniqueName: \"kubernetes.io/projected/3afba738-979a-4454-8a8b-cc59387b2814-kube-api-access-q6qld\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.057026 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7tjk\" (UniqueName: \"kubernetes.io/projected/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-kube-api-access-f7tjk\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.057747 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afba738-979a-4454-8a8b-cc59387b2814-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.057908 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.938874 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.938876 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.969402 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:33:16 crc kubenswrapper[4856]: I1203 09:33:16.985715 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.000723 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.010382 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.020896 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: E1203 09:33:17.021660 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.021689 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 09:33:17 crc kubenswrapper[4856]: E1203 09:33:17.021731 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-log" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.021738 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-log" Dec 03 09:33:17 crc kubenswrapper[4856]: E1203 09:33:17.021755 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-metadata" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.021765 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-metadata" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.021988 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-log" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.021999 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afba738-979a-4454-8a8b-cc59387b2814" containerName="nova-metadata-metadata" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.022011 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.022986 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.028184 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.028195 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.028556 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.049987 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.062397 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.071037 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.075969 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmxp\" (UniqueName: \"kubernetes.io/projected/316bf52a-d128-4d77-944e-fbd3107af8a2-kube-api-access-fxmxp\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076058 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316bf52a-d128-4d77-944e-fbd3107af8a2-logs\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076163 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076209 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076544 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-config-data\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076595 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076672 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.076743 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxq8t\" (UniqueName: \"kubernetes.io/projected/7ec5a006-1571-475d-8f44-d12cb737563b-kube-api-access-pxq8t\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.080061 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.080272 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.084306 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.178515 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmxp\" (UniqueName: \"kubernetes.io/projected/316bf52a-d128-4d77-944e-fbd3107af8a2-kube-api-access-fxmxp\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.178614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.178654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316bf52a-d128-4d77-944e-fbd3107af8a2-logs\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.178756 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.178794 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.179050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.179120 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-config-data\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.179151 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.179191 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.179223 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxq8t\" (UniqueName: \"kubernetes.io/projected/7ec5a006-1571-475d-8f44-d12cb737563b-kube-api-access-pxq8t\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.179650 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316bf52a-d128-4d77-944e-fbd3107af8a2-logs\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.184530 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.185386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.185472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.186574 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.187333 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-config-data\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.196682 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.197937 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec5a006-1571-475d-8f44-d12cb737563b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.200316 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmxp\" (UniqueName: \"kubernetes.io/projected/316bf52a-d128-4d77-944e-fbd3107af8a2-kube-api-access-fxmxp\") pod \"nova-metadata-0\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.202649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxq8t\" (UniqueName: \"kubernetes.io/projected/7ec5a006-1571-475d-8f44-d12cb737563b-kube-api-access-pxq8t\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ec5a006-1571-475d-8f44-d12cb737563b\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.350075 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.395302 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.872221 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.963868 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ec5a006-1571-475d-8f44-d12cb737563b","Type":"ContainerStarted","Data":"c6f34db42d020467c237b9e0f1817e197765f3d90f3dbe764f0a9862252d4181"} Dec 03 09:33:17 crc kubenswrapper[4856]: I1203 09:33:17.988747 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.162472 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.163204 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.166446 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.167396 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.702978 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afba738-979a-4454-8a8b-cc59387b2814" path="/var/lib/kubelet/pods/3afba738-979a-4454-8a8b-cc59387b2814/volumes" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.703593 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85" path="/var/lib/kubelet/pods/8857ca9b-3b50-4d9f-a9ca-71e0dba1aa85/volumes" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.974220 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ec5a006-1571-475d-8f44-d12cb737563b","Type":"ContainerStarted","Data":"2a2470dec74b112ebcc66852c973799d97c1083665249160c9d9e5d815f0c72d"} Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.981234 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"316bf52a-d128-4d77-944e-fbd3107af8a2","Type":"ContainerStarted","Data":"0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7"} Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.981278 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"316bf52a-d128-4d77-944e-fbd3107af8a2","Type":"ContainerStarted","Data":"56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061"} Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.981291 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"316bf52a-d128-4d77-944e-fbd3107af8a2","Type":"ContainerStarted","Data":"d01d3c9ca64c30285825ff0d801f24f28d7973c54f2dabc7f0292f6b68efd0ea"} Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.981308 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.986144 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 09:33:18 crc kubenswrapper[4856]: I1203 09:33:18.998307 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=2.998284837 podStartE2EDuration="2.998284837s" podCreationTimestamp="2025-12-03 09:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:18.997633481 +0000 UTC m=+1267.180525772" watchObservedRunningTime="2025-12-03 09:33:18.998284837 +0000 UTC m=+1267.181177138" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.043498 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.043479325 podStartE2EDuration="3.043479325s" podCreationTimestamp="2025-12-03 09:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:19.037732941 +0000 UTC m=+1267.220625252" watchObservedRunningTime="2025-12-03 09:33:19.043479325 +0000 UTC m=+1267.226371626" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.184891 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"] Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.186772 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.197034 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"] Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.337571 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.337667 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-config\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.337709 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.337738 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.337762 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9db\" (UniqueName: \"kubernetes.io/projected/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-kube-api-access-rg9db\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.337799 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.439135 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-config\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.439207 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.439241 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.439276 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg9db\" (UniqueName: \"kubernetes.io/projected/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-kube-api-access-rg9db\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.439318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.439390 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.440188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-config\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.440438 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.440575 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.440662 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.440938 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.470494 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg9db\" (UniqueName: \"kubernetes.io/projected/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-kube-api-access-rg9db\") pod \"dnsmasq-dns-cd5cbd7b9-p9qvz\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:19 crc kubenswrapper[4856]: I1203 09:33:19.514645 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:20 crc kubenswrapper[4856]: I1203 09:33:20.145678 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"] Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.002808 4856 generic.go:334] "Generic (PLEG): container finished" podID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerID="f39d6c8ba2341e15ac1d20051c8481a247e95f9d9a1ba29c619326cc4dcd2560" exitCode=0 Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.002942 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" event={"ID":"2e0b7933-046b-4eb4-aa43-cc3edf08dba1","Type":"ContainerDied","Data":"f39d6c8ba2341e15ac1d20051c8481a247e95f9d9a1ba29c619326cc4dcd2560"} Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.003644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" event={"ID":"2e0b7933-046b-4eb4-aa43-cc3edf08dba1","Type":"ContainerStarted","Data":"45498da0fefe99ea9b45751cd7d6ef551b34482e24c85f42192492c016c23178"} Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.480027 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.480949 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-central-agent" containerID="cri-o://e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6" gracePeriod=30 Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.481026 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="sg-core" containerID="cri-o://219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb" gracePeriod=30 Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.481049 4856 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-notification-agent" containerID="cri-o://dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863" gracePeriod=30 Dec 03 09:33:21 crc kubenswrapper[4856]: I1203 09:33:21.481085 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="proxy-httpd" containerID="cri-o://0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec" gracePeriod=30 Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.017089 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" event={"ID":"2e0b7933-046b-4eb4-aa43-cc3edf08dba1","Type":"ContainerStarted","Data":"01cf34104b690591aaffb0eab69330bfb8fe36c8ace22583ef662ebf7aa4fdc0"} Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.017469 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.020930 4856 generic.go:334] "Generic (PLEG): container finished" podID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerID="0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec" exitCode=0 Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.020975 4856 generic.go:334] "Generic (PLEG): container finished" podID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerID="219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb" exitCode=2 Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.020987 4856 generic.go:334] "Generic (PLEG): container finished" podID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerID="e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6" exitCode=0 Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.021035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerDied","Data":"0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec"} Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.021126 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerDied","Data":"219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb"} Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.021145 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerDied","Data":"e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6"} Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.043589 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" podStartSLOduration=3.043563066 podStartE2EDuration="3.043563066s" podCreationTimestamp="2025-12-03 09:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:22.042481999 +0000 UTC m=+1270.225374310" watchObservedRunningTime="2025-12-03 09:33:22.043563066 +0000 UTC m=+1270.226455367" Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.067236 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.067550 4856 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-log" containerID="cri-o://d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d" gracePeriod=30 Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.067768 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-api" containerID="cri-o://cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297" gracePeriod=30 Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.351066 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.395623 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.395689 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.759096 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:33:22 crc kubenswrapper[4856]: I1203 09:33:22.759167 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.032282 4856 generic.go:334] "Generic (PLEG): container finished" podID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerID="d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d" exitCode=143 Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.032366 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"456fffdc-0a3d-4d0e-8b12-6b41f561890b","Type":"ContainerDied","Data":"d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d"} Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.676740 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.752079 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn4lr\" (UniqueName: \"kubernetes.io/projected/810ebf87-5f39-4a72-801e-d2d8a888f180-kube-api-access-zn4lr\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.752217 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-sg-core-conf-yaml\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.752260 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-ceilometer-tls-certs\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.752316 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-config-data\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.753366 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-log-httpd\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.753420 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-combined-ca-bundle\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.753492 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-run-httpd\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.753558 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-scripts\") pod \"810ebf87-5f39-4a72-801e-d2d8a888f180\" (UID: \"810ebf87-5f39-4a72-801e-d2d8a888f180\") " Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.759970 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810ebf87-5f39-4a72-801e-d2d8a888f180-kube-api-access-zn4lr" (OuterVolumeSpecName: "kube-api-access-zn4lr") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "kube-api-access-zn4lr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.774039 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.774230 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn4lr\" (UniqueName: \"kubernetes.io/projected/810ebf87-5f39-4a72-801e-d2d8a888f180-kube-api-access-zn4lr\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.777013 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-scripts" (OuterVolumeSpecName: "scripts") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.783088 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.840251 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.877013 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.877860 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.877965 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810ebf87-5f39-4a72-801e-d2d8a888f180-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.878083 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.878669 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.908728 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.914953 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-config-data" (OuterVolumeSpecName: "config-data") pod "810ebf87-5f39-4a72-801e-d2d8a888f180" (UID: "810ebf87-5f39-4a72-801e-d2d8a888f180"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.980160 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.980216 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:23 crc kubenswrapper[4856]: I1203 09:33:23.980228 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810ebf87-5f39-4a72-801e-d2d8a888f180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.045916 4856 generic.go:334] "Generic (PLEG): container finished" podID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerID="dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863" exitCode=0 Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.045965 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerDied","Data":"dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863"} Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.045994 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810ebf87-5f39-4a72-801e-d2d8a888f180","Type":"ContainerDied","Data":"2cbb77c2421103c17975919a2a13b936cd78f69d6ad7f03a70fdc0a927add9f6"} Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.045996 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.046010 4856 scope.go:117] "RemoveContainer" containerID="0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.079953 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.081674 4856 scope.go:117] "RemoveContainer" containerID="219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.090617 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.111289 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.112483 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-notification-agent" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112526 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-notification-agent" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.112543 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="proxy-httpd" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112554 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="proxy-httpd" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.112577 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-central-agent" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112586 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-central-agent" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.112617 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="sg-core" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112624 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="sg-core" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112900 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="proxy-httpd" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112924 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="sg-core" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112935 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-central-agent" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.112951 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" containerName="ceilometer-notification-agent" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.115685 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.118202 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.119048 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.119625 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.129544 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.130983 4856 scope.go:117] "RemoveContainer" containerID="dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.161854 4856 scope.go:117] "RemoveContainer" containerID="e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.185515 4856 scope.go:117] "RemoveContainer" containerID="0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.186033 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec\": container with ID starting with 0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec not found: ID does not exist" containerID="0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.186095 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec"} err="failed to get container status \"0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec\": rpc error: code = NotFound desc = could not find container \"0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec\": container with ID starting with 0a60bb007b7c0a900a6ce4ec33843d34f8d9e49043168ca250215be28c8773ec not found: ID does not exist" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.186130 4856 scope.go:117] "RemoveContainer" containerID="219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.186443 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb\": container with ID starting with 219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb not found: ID does not exist" containerID="219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.186473 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb"} err="failed to get container status \"219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb\": rpc error: code = NotFound desc = could not find container \"219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb\": container with ID starting with 219995c4da0b044ac2ac087c13aa86e7267e6face7d3ca3d01e5fef4ddea77fb not found: ID does not exist" Dec 03 09:33:24 
crc kubenswrapper[4856]: I1203 09:33:24.186497 4856 scope.go:117] "RemoveContainer" containerID="dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.187002 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863\": container with ID starting with dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863 not found: ID does not exist" containerID="dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.187023 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863"} err="failed to get container status \"dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863\": rpc error: code = NotFound desc = could not find container \"dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863\": container with ID starting with dc9d24efb57d5099db1d5ca2142045ed313525304f13df2ba92e970bbe152863 not found: ID does not exist" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.187035 4856 scope.go:117] "RemoveContainer" containerID="e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6" Dec 03 09:33:24 crc kubenswrapper[4856]: E1203 09:33:24.187382 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6\": container with ID starting with e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6 not found: ID does not exist" containerID="e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.187403 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6"} err="failed to get container status \"e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6\": rpc error: code = NotFound desc = could not find container \"e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6\": container with ID starting with e335da398cc7b2e6f7623c1dc06f09efddfc4e50016b6998b1770a6928e7b6c6 not found: ID does not exist" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.285977 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286046 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286142 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-scripts\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " 
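
The ContainerStatus NotFound / "DeleteContainer returned error" pairs above are benign: each container was already removed by the runtime, and deletion is treated as idempotent. A sketch of that handling, with a sentinel error standing in for the gRPC NotFound status the CRI runtime returns:

    // Idempotent delete: a NotFound on removal means the container is already
    // gone, which is the desired end state, so it is logged and ignored.
    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("NotFound: ID does not exist")

    func deleteContainer(id string, live map[string]bool) error {
    	if !live[id] {
    		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    	}
    	delete(live, id)
    	return nil
    }

    func main() {
    	live := map[string]bool{} // runtime has already pruned everything
    	id := "0a60bb007b7c0a900a6ce4ec33843d34f8d9e490"
    	if err := deleteContainer(id, live); errors.Is(err, errNotFound) {
    		fmt.Printf("container %s already gone: %v\n", id[:8], err)
    	}
    }
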
pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286277 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-config-data\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286316 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf8c7439-4ac0-4c40-8d21-7804cb6010df-log-httpd\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286336 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286360 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf8c7439-4ac0-4c40-8d21-7804cb6010df-run-httpd\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.286378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfw8p\" (UniqueName: \"kubernetes.io/projected/bf8c7439-4ac0-4c40-8d21-7804cb6010df-kube-api-access-dfw8p\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388134 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-scripts\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388249 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-config-data\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf8c7439-4ac0-4c40-8d21-7804cb6010df-log-httpd\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388316 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf8c7439-4ac0-4c40-8d21-7804cb6010df-run-httpd\") pod 
\"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfw8p\" (UniqueName: \"kubernetes.io/projected/bf8c7439-4ac0-4c40-8d21-7804cb6010df-kube-api-access-dfw8p\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388376 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.388784 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf8c7439-4ac0-4c40-8d21-7804cb6010df-log-httpd\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.389438 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf8c7439-4ac0-4c40-8d21-7804cb6010df-run-httpd\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.392705 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-scripts\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.392892 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.393289 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.395087 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.395711 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf8c7439-4ac0-4c40-8d21-7804cb6010df-config-data\") pod \"ceilometer-0\" (UID: 
\"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.411071 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfw8p\" (UniqueName: \"kubernetes.io/projected/bf8c7439-4ac0-4c40-8d21-7804cb6010df-kube-api-access-dfw8p\") pod \"ceilometer-0\" (UID: \"bf8c7439-4ac0-4c40-8d21-7804cb6010df\") " pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.437364 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.705332 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810ebf87-5f39-4a72-801e-d2d8a888f180" path="/var/lib/kubelet/pods/810ebf87-5f39-4a72-801e-d2d8a888f180/volumes" Dec 03 09:33:24 crc kubenswrapper[4856]: I1203 09:33:24.890748 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 09:33:24 crc kubenswrapper[4856]: W1203 09:33:24.896333 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf8c7439_4ac0_4c40_8d21_7804cb6010df.slice/crio-94d1f3c66c70c2bd28ec6f1af1e9f886221f1e02dd6b835214b5bd64d6cc51fd WatchSource:0}: Error finding container 94d1f3c66c70c2bd28ec6f1af1e9f886221f1e02dd6b835214b5bd64d6cc51fd: Status 404 returned error can't find the container with id 94d1f3c66c70c2bd28ec6f1af1e9f886221f1e02dd6b835214b5bd64d6cc51fd Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.057125 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf8c7439-4ac0-4c40-8d21-7804cb6010df","Type":"ContainerStarted","Data":"94d1f3c66c70c2bd28ec6f1af1e9f886221f1e02dd6b835214b5bd64d6cc51fd"} Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.675265 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.824038 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-combined-ca-bundle\") pod \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.824147 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46bm\" (UniqueName: \"kubernetes.io/projected/456fffdc-0a3d-4d0e-8b12-6b41f561890b-kube-api-access-s46bm\") pod \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.824176 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/456fffdc-0a3d-4d0e-8b12-6b41f561890b-logs\") pod \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.824206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-config-data\") pod \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\" (UID: \"456fffdc-0a3d-4d0e-8b12-6b41f561890b\") " Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.825045 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456fffdc-0a3d-4d0e-8b12-6b41f561890b-logs" (OuterVolumeSpecName: "logs") pod "456fffdc-0a3d-4d0e-8b12-6b41f561890b" (UID: "456fffdc-0a3d-4d0e-8b12-6b41f561890b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.827256 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/456fffdc-0a3d-4d0e-8b12-6b41f561890b-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.829593 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456fffdc-0a3d-4d0e-8b12-6b41f561890b-kube-api-access-s46bm" (OuterVolumeSpecName: "kube-api-access-s46bm") pod "456fffdc-0a3d-4d0e-8b12-6b41f561890b" (UID: "456fffdc-0a3d-4d0e-8b12-6b41f561890b"). InnerVolumeSpecName "kube-api-access-s46bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.853091 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "456fffdc-0a3d-4d0e-8b12-6b41f561890b" (UID: "456fffdc-0a3d-4d0e-8b12-6b41f561890b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.856988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-config-data" (OuterVolumeSpecName: "config-data") pod "456fffdc-0a3d-4d0e-8b12-6b41f561890b" (UID: "456fffdc-0a3d-4d0e-8b12-6b41f561890b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.929056 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.929371 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46bm\" (UniqueName: \"kubernetes.io/projected/456fffdc-0a3d-4d0e-8b12-6b41f561890b-kube-api-access-s46bm\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:25 crc kubenswrapper[4856]: I1203 09:33:25.929383 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fffdc-0a3d-4d0e-8b12-6b41f561890b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.072153 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf8c7439-4ac0-4c40-8d21-7804cb6010df","Type":"ContainerStarted","Data":"aa33a8737e1f6026850fbf3d5082898dd5807fc48df688d8308e45f830f743f1"} Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.076443 4856 generic.go:334] "Generic (PLEG): container finished" podID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerID="cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297" exitCode=0 Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.076516 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"456fffdc-0a3d-4d0e-8b12-6b41f561890b","Type":"ContainerDied","Data":"cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297"} Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.076560 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.076617 4856 scope.go:117] "RemoveContainer" containerID="cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.076575 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"456fffdc-0a3d-4d0e-8b12-6b41f561890b","Type":"ContainerDied","Data":"8a8fa54b62643d599415ee6f53ac134813f3d7194b11004ea9761bc6e58056db"} Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.113062 4856 scope.go:117] "RemoveContainer" containerID="d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.120982 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.133998 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.158167 4856 scope.go:117] "RemoveContainer" containerID="cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297" Dec 03 09:33:26 crc kubenswrapper[4856]: E1203 09:33:26.162158 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297\": container with ID starting with cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297 not found: ID does not exist" containerID="cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.162236 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297"} err="failed to get container status \"cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297\": rpc error: code = NotFound desc = could not find container \"cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297\": container with ID starting with cf34083e5a9200316d62afb05d78688cd682de7ce30ed06647ba545100b3b297 not found: ID does not exist" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.162301 4856 scope.go:117] "RemoveContainer" containerID="d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d" Dec 03 09:33:26 crc kubenswrapper[4856]: E1203 09:33:26.163896 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d\": container with ID starting with d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d not found: ID does not exist" containerID="d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.163968 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d"} err="failed to get container status \"d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d\": rpc error: code = NotFound desc = could not find container \"d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d\": container with ID starting with d8301da63bedb83115c014f537164885cf6469793fe7cc187e3ca4e7206fa96d not found: ID does not exist" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.176173 4856 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:26 crc kubenswrapper[4856]: E1203 09:33:26.177126 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-api" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.177145 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-api" Dec 03 09:33:26 crc kubenswrapper[4856]: E1203 09:33:26.177184 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-log" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.177192 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-log" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.177657 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-api" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.177706 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" containerName="nova-api-log" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.179637 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.191285 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.191670 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.192062 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.243065 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.357784 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.358171 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxw5\" (UniqueName: \"kubernetes.io/projected/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-kube-api-access-7kxw5\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.358228 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.358262 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " 
pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.358447 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-logs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.358660 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-config-data\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.460337 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.460459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxw5\" (UniqueName: \"kubernetes.io/projected/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-kube-api-access-7kxw5\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.460505 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.460554 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.460576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-logs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.460618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-config-data\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.464375 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-logs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.465856 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-config-data\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc 
kubenswrapper[4856]: I1203 09:33:26.469388 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.470091 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.470426 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.484178 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxw5\" (UniqueName: \"kubernetes.io/projected/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-kube-api-access-7kxw5\") pod \"nova-api-0\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.549486 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:26 crc kubenswrapper[4856]: I1203 09:33:26.712151 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456fffdc-0a3d-4d0e-8b12-6b41f561890b" path="/var/lib/kubelet/pods/456fffdc-0a3d-4d0e-8b12-6b41f561890b/volumes" Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.046738 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.088590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf8c7439-4ac0-4c40-8d21-7804cb6010df","Type":"ContainerStarted","Data":"03a898cdaf34e6be122a06c0665b8f86b9b2b906a4a3a86c129c14719334e435"} Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.090726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4ffd08f9-4718-4f8e-b97b-052231b9f8c2","Type":"ContainerStarted","Data":"5f173ffd282c004f5cea724d3b07967a3c7024134310385063cd2c953c270c32"} Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.351067 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.374860 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.396395 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 09:33:27 crc kubenswrapper[4856]: I1203 09:33:27.396446 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.111211 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf8c7439-4ac0-4c40-8d21-7804cb6010df","Type":"ContainerStarted","Data":"3dbdc9ddf0cc4fad1b1bd8f2e6f8918556f9873e003c844089daaaf4c1228a96"} Dec 03 
09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.120018 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4ffd08f9-4718-4f8e-b97b-052231b9f8c2","Type":"ContainerStarted","Data":"d83f8dda33e53f052e379cdd2752468340b43c9cb312c60f1fca03a57d141515"} Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.120082 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4ffd08f9-4718-4f8e-b97b-052231b9f8c2","Type":"ContainerStarted","Data":"67b377f19e77fa9ef4238737d426f20fa9a4793c9a5f9fbb5fc5abc0bd88c526"} Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.157825 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.157787559 podStartE2EDuration="2.157787559s" podCreationTimestamp="2025-12-03 09:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:28.139358265 +0000 UTC m=+1276.322250566" watchObservedRunningTime="2025-12-03 09:33:28.157787559 +0000 UTC m=+1276.340679860" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.163409 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.382251 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tz5lq"] Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.383754 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.386547 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.389716 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.397860 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tz5lq"] Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.548249 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-config-data\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.548413 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlclp\" (UniqueName: \"kubernetes.io/projected/2fe1f0d9-6c77-4dab-b894-40c419c10324-kube-api-access-tlclp\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.548509 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.548538 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-scripts\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.557139 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.557440 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.654925 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.654994 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-scripts\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.655054 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-config-data\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.655168 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlclp\" (UniqueName: \"kubernetes.io/projected/2fe1f0d9-6c77-4dab-b894-40c419c10324-kube-api-access-tlclp\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.666838 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.667710 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-config-data\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.685424 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlclp\" (UniqueName: 
\"kubernetes.io/projected/2fe1f0d9-6c77-4dab-b894-40c419c10324-kube-api-access-tlclp\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.688411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-scripts\") pod \"nova-cell1-cell-mapping-tz5lq\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:28 crc kubenswrapper[4856]: I1203 09:33:28.872875 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:29 crc kubenswrapper[4856]: W1203 09:33:29.375255 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1f0d9_6c77_4dab_b894_40c419c10324.slice/crio-ea211b3f7a2ddebaa3daa93ed227f158d4d21d9365c48e09dbba19e1a17254d6 WatchSource:0}: Error finding container ea211b3f7a2ddebaa3daa93ed227f158d4d21d9365c48e09dbba19e1a17254d6: Status 404 returned error can't find the container with id ea211b3f7a2ddebaa3daa93ed227f158d4d21d9365c48e09dbba19e1a17254d6 Dec 03 09:33:29 crc kubenswrapper[4856]: I1203 09:33:29.383350 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tz5lq"] Dec 03 09:33:29 crc kubenswrapper[4856]: I1203 09:33:29.517987 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" Dec 03 09:33:29 crc kubenswrapper[4856]: I1203 09:33:29.617921 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-294px"] Dec 03 09:33:29 crc kubenswrapper[4856]: I1203 09:33:29.618451 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-294px" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="dnsmasq-dns" containerID="cri-o://32af87210c51d65938f2028acf3b804d714fff362c9d1c6cbba1c5be49ff117d" gracePeriod=10 Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.148921 4856 generic.go:334] "Generic (PLEG): container finished" podID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerID="32af87210c51d65938f2028acf3b804d714fff362c9d1c6cbba1c5be49ff117d" exitCode=0 Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.149010 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-294px" event={"ID":"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c","Type":"ContainerDied","Data":"32af87210c51d65938f2028acf3b804d714fff362c9d1c6cbba1c5be49ff117d"} Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.153100 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf8c7439-4ac0-4c40-8d21-7804cb6010df","Type":"ContainerStarted","Data":"46b8c334076f9be61dad305a637dcccefd5a5e67e2339937d83f129e40b68abe"} Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.154669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tz5lq" event={"ID":"2fe1f0d9-6c77-4dab-b894-40c419c10324","Type":"ContainerStarted","Data":"9bb171008f694d76e4b57a89d46ae637f59c5e443e24e8d1ed158e0dcfbfc77e"} Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.154840 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tz5lq" 
event={"ID":"2fe1f0d9-6c77-4dab-b894-40c419c10324","Type":"ContainerStarted","Data":"ea211b3f7a2ddebaa3daa93ed227f158d4d21d9365c48e09dbba19e1a17254d6"} Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.155986 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.174195 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-294px" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.188314 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tz5lq" podStartSLOduration=2.188291776 podStartE2EDuration="2.188291776s" podCreationTimestamp="2025-12-03 09:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:30.177213917 +0000 UTC m=+1278.360106218" watchObservedRunningTime="2025-12-03 09:33:30.188291776 +0000 UTC m=+1278.371184097" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.199946 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.23724828 podStartE2EDuration="6.199929129s" podCreationTimestamp="2025-12-03 09:33:24 +0000 UTC" firstStartedPulling="2025-12-03 09:33:24.898384739 +0000 UTC m=+1273.081277040" lastFinishedPulling="2025-12-03 09:33:28.861065588 +0000 UTC m=+1277.043957889" observedRunningTime="2025-12-03 09:33:30.198596565 +0000 UTC m=+1278.381488866" watchObservedRunningTime="2025-12-03 09:33:30.199929129 +0000 UTC m=+1278.382821430" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.766835 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.872097 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-svc\") pod \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.872288 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnxj2\" (UniqueName: \"kubernetes.io/projected/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-kube-api-access-qnxj2\") pod \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.872404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-nb\") pod \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.872465 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-config\") pod \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.872669 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-sb\") pod \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.872703 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-swift-storage-0\") pod \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\" (UID: \"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c\") " Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.894756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-kube-api-access-qnxj2" (OuterVolumeSpecName: "kube-api-access-qnxj2") pod "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" (UID: "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c"). InnerVolumeSpecName "kube-api-access-qnxj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.949735 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" (UID: "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.971822 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" (UID: "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.975396 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnxj2\" (UniqueName: \"kubernetes.io/projected/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-kube-api-access-qnxj2\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.975440 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.975451 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:30 crc kubenswrapper[4856]: I1203 09:33:30.976154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" (UID: "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.003605 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" (UID: "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.025505 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-config" (OuterVolumeSpecName: "config") pod "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" (UID: "0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.080236 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.080282 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.080294 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c-config\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.169254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-294px" event={"ID":"0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c","Type":"ContainerDied","Data":"2881b17e0185bbac7dfc0077869fdc4fd43b74613c9d7c8e200ca0601add8923"} Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.169327 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-294px" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.169357 4856 scope.go:117] "RemoveContainer" containerID="32af87210c51d65938f2028acf3b804d714fff362c9d1c6cbba1c5be49ff117d" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.195214 4856 scope.go:117] "RemoveContainer" containerID="6f1aa7610445ce995ac96720e5f238170417bd237eb515195733ffe495aa0e18" Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.229603 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-294px"] Dec 03 09:33:31 crc kubenswrapper[4856]: I1203 09:33:31.244048 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-294px"] Dec 03 09:33:32 crc kubenswrapper[4856]: I1203 09:33:32.704186 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" path="/var/lib/kubelet/pods/0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c/volumes" Dec 03 09:33:36 crc kubenswrapper[4856]: I1203 09:33:36.261924 4856 generic.go:334] "Generic (PLEG): container finished" podID="2fe1f0d9-6c77-4dab-b894-40c419c10324" containerID="9bb171008f694d76e4b57a89d46ae637f59c5e443e24e8d1ed158e0dcfbfc77e" exitCode=0 Dec 03 09:33:36 crc kubenswrapper[4856]: I1203 09:33:36.262924 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tz5lq" event={"ID":"2fe1f0d9-6c77-4dab-b894-40c419c10324","Type":"ContainerDied","Data":"9bb171008f694d76e4b57a89d46ae637f59c5e443e24e8d1ed158e0dcfbfc77e"} Dec 03 09:33:36 crc kubenswrapper[4856]: I1203 09:33:36.550734 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:33:36 crc kubenswrapper[4856]: I1203 09:33:36.550785 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.406602 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.416761 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.419182 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.567981 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.568081 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.660609 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.665498 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-scripts\") pod \"2fe1f0d9-6c77-4dab-b894-40c419c10324\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.665876 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-combined-ca-bundle\") pod \"2fe1f0d9-6c77-4dab-b894-40c419c10324\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.665991 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlclp\" (UniqueName: \"kubernetes.io/projected/2fe1f0d9-6c77-4dab-b894-40c419c10324-kube-api-access-tlclp\") pod \"2fe1f0d9-6c77-4dab-b894-40c419c10324\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.666093 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-config-data\") pod \"2fe1f0d9-6c77-4dab-b894-40c419c10324\" (UID: \"2fe1f0d9-6c77-4dab-b894-40c419c10324\") " Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.677031 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-scripts" (OuterVolumeSpecName: "scripts") pod "2fe1f0d9-6c77-4dab-b894-40c419c10324" (UID: "2fe1f0d9-6c77-4dab-b894-40c419c10324"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.678065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe1f0d9-6c77-4dab-b894-40c419c10324-kube-api-access-tlclp" (OuterVolumeSpecName: "kube-api-access-tlclp") pod "2fe1f0d9-6c77-4dab-b894-40c419c10324" (UID: "2fe1f0d9-6c77-4dab-b894-40c419c10324"). InnerVolumeSpecName "kube-api-access-tlclp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.732903 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-config-data" (OuterVolumeSpecName: "config-data") pod "2fe1f0d9-6c77-4dab-b894-40c419c10324" (UID: "2fe1f0d9-6c77-4dab-b894-40c419c10324"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.735079 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fe1f0d9-6c77-4dab-b894-40c419c10324" (UID: "2fe1f0d9-6c77-4dab-b894-40c419c10324"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.769487 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlclp\" (UniqueName: \"kubernetes.io/projected/2fe1f0d9-6c77-4dab-b894-40c419c10324-kube-api-access-tlclp\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.770715 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.770738 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:37 crc kubenswrapper[4856]: I1203 09:33:37.770754 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1f0d9-6c77-4dab-b894-40c419c10324-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.290781 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tz5lq" event={"ID":"2fe1f0d9-6c77-4dab-b894-40c419c10324","Type":"ContainerDied","Data":"ea211b3f7a2ddebaa3daa93ed227f158d4d21d9365c48e09dbba19e1a17254d6"} Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.291640 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea211b3f7a2ddebaa3daa93ed227f158d4d21d9365c48e09dbba19e1a17254d6" Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.290831 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tz5lq" Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.298088 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.501967 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.502298 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-log" containerID="cri-o://67b377f19e77fa9ef4238737d426f20fa9a4793c9a5f9fbb5fc5abc0bd88c526" gracePeriod=30 Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.502514 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-api" containerID="cri-o://d83f8dda33e53f052e379cdd2752468340b43c9cb312c60f1fca03a57d141515" gracePeriod=30 Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.517020 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.517256 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0b22308-083d-4f14-a30b-a5d02357cdaf" containerName="nova-scheduler-scheduler" containerID="cri-o://5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" gracePeriod=30 Dec 03 09:33:38 crc kubenswrapper[4856]: I1203 09:33:38.562800 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:39 crc kubenswrapper[4856]: I1203 
09:33:39.304984 4856 generic.go:334] "Generic (PLEG): container finished" podID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerID="67b377f19e77fa9ef4238737d426f20fa9a4793c9a5f9fbb5fc5abc0bd88c526" exitCode=143 Dec 03 09:33:39 crc kubenswrapper[4856]: I1203 09:33:39.305034 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4ffd08f9-4718-4f8e-b97b-052231b9f8c2","Type":"ContainerDied","Data":"67b377f19e77fa9ef4238737d426f20fa9a4793c9a5f9fbb5fc5abc0bd88c526"} Dec 03 09:33:40 crc kubenswrapper[4856]: I1203 09:33:40.315113 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-log" containerID="cri-o://56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061" gracePeriod=30 Dec 03 09:33:40 crc kubenswrapper[4856]: I1203 09:33:40.315161 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-metadata" containerID="cri-o://0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7" gracePeriod=30 Dec 03 09:33:41 crc kubenswrapper[4856]: I1203 09:33:41.329365 4856 generic.go:334] "Generic (PLEG): container finished" podID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerID="56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061" exitCode=143 Dec 03 09:33:41 crc kubenswrapper[4856]: I1203 09:33:41.329507 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"316bf52a-d128-4d77-944e-fbd3107af8a2","Type":"ContainerDied","Data":"56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061"} Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.144288 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b is running failed: container process not found" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.145834 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b is running failed: container process not found" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.146194 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b is running failed: container process not found" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.146229 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="b0b22308-083d-4f14-a30b-a5d02357cdaf" containerName="nova-scheduler-scheduler" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.209111 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.398861 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62z6w\" (UniqueName: \"kubernetes.io/projected/b0b22308-083d-4f14-a30b-a5d02357cdaf-kube-api-access-62z6w\") pod \"b0b22308-083d-4f14-a30b-a5d02357cdaf\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.398987 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-combined-ca-bundle\") pod \"b0b22308-083d-4f14-a30b-a5d02357cdaf\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.402218 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-config-data\") pod \"b0b22308-083d-4f14-a30b-a5d02357cdaf\" (UID: \"b0b22308-083d-4f14-a30b-a5d02357cdaf\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.413129 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b22308-083d-4f14-a30b-a5d02357cdaf-kube-api-access-62z6w" (OuterVolumeSpecName: "kube-api-access-62z6w") pod "b0b22308-083d-4f14-a30b-a5d02357cdaf" (UID: "b0b22308-083d-4f14-a30b-a5d02357cdaf"). InnerVolumeSpecName "kube-api-access-62z6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.413426 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.413309 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0b22308-083d-4f14-a30b-a5d02357cdaf","Type":"ContainerDied","Data":"5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b"} Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.413760 4856 scope.go:117] "RemoveContainer" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.413267 4856 generic.go:334] "Generic (PLEG): container finished" podID="b0b22308-083d-4f14-a30b-a5d02357cdaf" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" exitCode=0 Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.414333 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0b22308-083d-4f14-a30b-a5d02357cdaf","Type":"ContainerDied","Data":"dbe8b147c4c206ac1331d2e28801e598c9cd4b73b31086684794efeaada8a572"} Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.424531 4856 generic.go:334] "Generic (PLEG): container finished" podID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerID="d83f8dda33e53f052e379cdd2752468340b43c9cb312c60f1fca03a57d141515" exitCode=0 Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.424685 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4ffd08f9-4718-4f8e-b97b-052231b9f8c2","Type":"ContainerDied","Data":"d83f8dda33e53f052e379cdd2752468340b43c9cb312c60f1fca03a57d141515"} Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.449801 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0b22308-083d-4f14-a30b-a5d02357cdaf" (UID: "b0b22308-083d-4f14-a30b-a5d02357cdaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.450751 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.463958 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:38736->10.217.0.197:8775: read: connection reset by peer" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.464125 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:38744->10.217.0.197:8775: read: connection reset by peer" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.472293 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-config-data" (OuterVolumeSpecName: "config-data") pod "b0b22308-083d-4f14-a30b-a5d02357cdaf" (UID: "b0b22308-083d-4f14-a30b-a5d02357cdaf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.536953 4856 scope.go:117] "RemoveContainer" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.538891 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b\": container with ID starting with 5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b not found: ID does not exist" containerID="5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.538937 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b"} err="failed to get container status \"5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b\": rpc error: code = NotFound desc = could not find container \"5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b\": container with ID starting with 5c48889d34d955bda5950ddae016ae7c481bbfb77680e32747db69f13fbc849b not found: ID does not exist" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.540058 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-public-tls-certs\") pod \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.540319 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-combined-ca-bundle\") pod \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.540535 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-logs\") pod \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.540696 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-config-data\") pod \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.540769 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-internal-tls-certs\") pod \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.540876 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxw5\" (UniqueName: \"kubernetes.io/projected/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-kube-api-access-7kxw5\") pod \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\" (UID: \"4ffd08f9-4718-4f8e-b97b-052231b9f8c2\") " Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.542165 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-logs" (OuterVolumeSpecName: "logs") pod "4ffd08f9-4718-4f8e-b97b-052231b9f8c2" (UID: "4ffd08f9-4718-4f8e-b97b-052231b9f8c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.549128 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.549186 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62z6w\" (UniqueName: \"kubernetes.io/projected/b0b22308-083d-4f14-a30b-a5d02357cdaf-kube-api-access-62z6w\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.549205 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.549222 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b22308-083d-4f14-a30b-a5d02357cdaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.553719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-kube-api-access-7kxw5" (OuterVolumeSpecName: "kube-api-access-7kxw5") pod "4ffd08f9-4718-4f8e-b97b-052231b9f8c2" (UID: "4ffd08f9-4718-4f8e-b97b-052231b9f8c2"). InnerVolumeSpecName "kube-api-access-7kxw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.579064 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ffd08f9-4718-4f8e-b97b-052231b9f8c2" (UID: "4ffd08f9-4718-4f8e-b97b-052231b9f8c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.579592 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-config-data" (OuterVolumeSpecName: "config-data") pod "4ffd08f9-4718-4f8e-b97b-052231b9f8c2" (UID: "4ffd08f9-4718-4f8e-b97b-052231b9f8c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.604692 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ffd08f9-4718-4f8e-b97b-052231b9f8c2" (UID: "4ffd08f9-4718-4f8e-b97b-052231b9f8c2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.613152 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ffd08f9-4718-4f8e-b97b-052231b9f8c2" (UID: "4ffd08f9-4718-4f8e-b97b-052231b9f8c2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.652300 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.652394 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.652434 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxw5\" (UniqueName: \"kubernetes.io/projected/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-kube-api-access-7kxw5\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.652452 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.652464 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd08f9-4718-4f8e-b97b-052231b9f8c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.758310 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.803561 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.826513 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.827288 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe1f0d9-6c77-4dab-b894-40c419c10324" containerName="nova-manage" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827313 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe1f0d9-6c77-4dab-b894-40c419c10324" containerName="nova-manage" Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.827329 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b22308-083d-4f14-a30b-a5d02357cdaf" containerName="nova-scheduler-scheduler" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827344 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b22308-083d-4f14-a30b-a5d02357cdaf" containerName="nova-scheduler-scheduler" Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.827360 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-log" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827367 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-log" Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.827394 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="init" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827401 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="init" Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.827415 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-api" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827422 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-api" Dec 03 09:33:43 crc kubenswrapper[4856]: E1203 09:33:43.827434 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="dnsmasq-dns" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827441 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="dnsmasq-dns" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827701 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-api" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827712 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b22308-083d-4f14-a30b-a5d02357cdaf" containerName="nova-scheduler-scheduler" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827727 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0495157c-a5b7-4e1e-ac5a-67b5f9b9dd4c" containerName="dnsmasq-dns" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827739 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" containerName="nova-api-log" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.827748 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe1f0d9-6c77-4dab-b894-40c419c10324" containerName="nova-manage" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.828781 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.836697 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.837709 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.857105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6tf\" (UniqueName: \"kubernetes.io/projected/b253f904-482d-4e19-b899-0304f9382759-kube-api-access-5m6tf\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.857179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b253f904-482d-4e19-b899-0304f9382759-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.857279 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b253f904-482d-4e19-b899-0304f9382759-config-data\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.958278 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6tf\" (UniqueName: \"kubernetes.io/projected/b253f904-482d-4e19-b899-0304f9382759-kube-api-access-5m6tf\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.958334 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b253f904-482d-4e19-b899-0304f9382759-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.958455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b253f904-482d-4e19-b899-0304f9382759-config-data\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.962585 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b253f904-482d-4e19-b899-0304f9382759-config-data\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.964439 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b253f904-482d-4e19-b899-0304f9382759-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:43 crc kubenswrapper[4856]: I1203 09:33:43.974708 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6tf\" (UniqueName: 
\"kubernetes.io/projected/b253f904-482d-4e19-b899-0304f9382759-kube-api-access-5m6tf\") pod \"nova-scheduler-0\" (UID: \"b253f904-482d-4e19-b899-0304f9382759\") " pod="openstack/nova-scheduler-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.072064 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.167503 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-config-data\") pod \"316bf52a-d128-4d77-944e-fbd3107af8a2\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.169020 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-nova-metadata-tls-certs\") pod \"316bf52a-d128-4d77-944e-fbd3107af8a2\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.169115 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-combined-ca-bundle\") pod \"316bf52a-d128-4d77-944e-fbd3107af8a2\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.169150 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmxp\" (UniqueName: \"kubernetes.io/projected/316bf52a-d128-4d77-944e-fbd3107af8a2-kube-api-access-fxmxp\") pod \"316bf52a-d128-4d77-944e-fbd3107af8a2\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.169181 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316bf52a-d128-4d77-944e-fbd3107af8a2-logs\") pod \"316bf52a-d128-4d77-944e-fbd3107af8a2\" (UID: \"316bf52a-d128-4d77-944e-fbd3107af8a2\") " Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.170086 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316bf52a-d128-4d77-944e-fbd3107af8a2-logs" (OuterVolumeSpecName: "logs") pod "316bf52a-d128-4d77-944e-fbd3107af8a2" (UID: "316bf52a-d128-4d77-944e-fbd3107af8a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.174534 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316bf52a-d128-4d77-944e-fbd3107af8a2-kube-api-access-fxmxp" (OuterVolumeSpecName: "kube-api-access-fxmxp") pod "316bf52a-d128-4d77-944e-fbd3107af8a2" (UID: "316bf52a-d128-4d77-944e-fbd3107af8a2"). InnerVolumeSpecName "kube-api-access-fxmxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.185166 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.204957 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "316bf52a-d128-4d77-944e-fbd3107af8a2" (UID: "316bf52a-d128-4d77-944e-fbd3107af8a2"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.223454 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-config-data" (OuterVolumeSpecName: "config-data") pod "316bf52a-d128-4d77-944e-fbd3107af8a2" (UID: "316bf52a-d128-4d77-944e-fbd3107af8a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.234550 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "316bf52a-d128-4d77-944e-fbd3107af8a2" (UID: "316bf52a-d128-4d77-944e-fbd3107af8a2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.273448 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.273510 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.273521 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxmxp\" (UniqueName: \"kubernetes.io/projected/316bf52a-d128-4d77-944e-fbd3107af8a2-kube-api-access-fxmxp\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.273533 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316bf52a-d128-4d77-944e-fbd3107af8a2-logs\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.273545 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316bf52a-d128-4d77-944e-fbd3107af8a2-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.442127 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4ffd08f9-4718-4f8e-b97b-052231b9f8c2","Type":"ContainerDied","Data":"5f173ffd282c004f5cea724d3b07967a3c7024134310385063cd2c953c270c32"} Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.442185 4856 scope.go:117] "RemoveContainer" containerID="d83f8dda33e53f052e379cdd2752468340b43c9cb312c60f1fca03a57d141515" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.442135 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.444465 4856 generic.go:334] "Generic (PLEG): container finished" podID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerID="0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7" exitCode=0 Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.444506 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.444523 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"316bf52a-d128-4d77-944e-fbd3107af8a2","Type":"ContainerDied","Data":"0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7"} Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.444548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"316bf52a-d128-4d77-944e-fbd3107af8a2","Type":"ContainerDied","Data":"d01d3c9ca64c30285825ff0d801f24f28d7973c54f2dabc7f0292f6b68efd0ea"} Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.477858 4856 scope.go:117] "RemoveContainer" containerID="67b377f19e77fa9ef4238737d426f20fa9a4793c9a5f9fbb5fc5abc0bd88c526" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.498991 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.512885 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.522779 4856 scope.go:117] "RemoveContainer" containerID="0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.524168 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.536194 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.544756 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: E1203 09:33:44.545307 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-metadata" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.545329 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-metadata" Dec 03 09:33:44 crc kubenswrapper[4856]: E1203 09:33:44.545354 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-log" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.545360 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-log" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.545544 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-metadata" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.545572 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" containerName="nova-metadata-log" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.546751 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.551321 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.551495 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.559027 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.561639 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.563905 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.564109 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.564267 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579150 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-config-data\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579218 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579244 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7384e37c-9204-4c80-9119-3c5454f32c80-logs\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579267 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-config-data\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579283 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579331 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579381 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579399 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579429 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnb5\" (UniqueName: \"kubernetes.io/projected/7384e37c-9204-4c80-9119-3c5454f32c80-kube-api-access-6nnb5\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579459 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1d29f7b-f4ed-4266-8713-a7252ca355fe-logs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.579479 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mftm\" (UniqueName: \"kubernetes.io/projected/c1d29f7b-f4ed-4266-8713-a7252ca355fe-kube-api-access-9mftm\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.580940 4856 scope.go:117] "RemoveContainer" containerID="56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.591365 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.608384 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.626286 4856 scope.go:117] "RemoveContainer" containerID="0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7" Dec 03 09:33:44 crc kubenswrapper[4856]: E1203 09:33:44.627070 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7\": container with ID starting with 0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7 not found: ID does not exist" containerID="0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.627153 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7"} err="failed to get container status \"0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7\": rpc error: code = NotFound desc = could not find container \"0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7\": container with ID starting with 0f0f0929ae1839a70aad570820ed1ed720607f200b3dccb3228e961803356fd7 not found: ID does not exist" Dec 03 09:33:44 
crc kubenswrapper[4856]: I1203 09:33:44.627194 4856 scope.go:117] "RemoveContainer" containerID="56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061" Dec 03 09:33:44 crc kubenswrapper[4856]: E1203 09:33:44.627775 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061\": container with ID starting with 56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061 not found: ID does not exist" containerID="56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.628421 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061"} err="failed to get container status \"56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061\": rpc error: code = NotFound desc = could not find container \"56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061\": container with ID starting with 56a9c5b79a9464980eb84d458bfa105019185b14a0a8351a4e7b2b21f73c5061 not found: ID does not exist" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.683827 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-config-data\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.683960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684013 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7384e37c-9204-4c80-9119-3c5454f32c80-logs\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684057 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-config-data\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684083 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684197 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684348 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684401 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnb5\" (UniqueName: \"kubernetes.io/projected/7384e37c-9204-4c80-9119-3c5454f32c80-kube-api-access-6nnb5\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1d29f7b-f4ed-4266-8713-a7252ca355fe-logs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.684494 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mftm\" (UniqueName: \"kubernetes.io/projected/c1d29f7b-f4ed-4266-8713-a7252ca355fe-kube-api-access-9mftm\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.685423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1d29f7b-f4ed-4266-8713-a7252ca355fe-logs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.686759 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7384e37c-9204-4c80-9119-3c5454f32c80-logs\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: W1203 09:33:44.694964 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb253f904_482d_4e19_b899_0304f9382759.slice/crio-69ca8adcd66f9db26b53b4bbb68842b6ce7c9b8d20777ecd9c1b453703ece9a6 WatchSource:0}: Error finding container 69ca8adcd66f9db26b53b4bbb68842b6ce7c9b8d20777ecd9c1b453703ece9a6: Status 404 returned error can't find the container with id 69ca8adcd66f9db26b53b4bbb68842b6ce7c9b8d20777ecd9c1b453703ece9a6 Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.696917 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.696992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" 
Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.697173 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.698497 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.699216 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.699965 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316bf52a-d128-4d77-944e-fbd3107af8a2" path="/var/lib/kubelet/pods/316bf52a-d128-4d77-944e-fbd3107af8a2/volumes" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.700687 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffd08f9-4718-4f8e-b97b-052231b9f8c2" path="/var/lib/kubelet/pods/4ffd08f9-4718-4f8e-b97b-052231b9f8c2/volumes" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.701335 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d29f7b-f4ed-4266-8713-a7252ca355fe-config-data\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.701391 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b22308-083d-4f14-a30b-a5d02357cdaf" path="/var/lib/kubelet/pods/b0b22308-083d-4f14-a30b-a5d02357cdaf/volumes" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.702525 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.704913 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7384e37c-9204-4c80-9119-3c5454f32c80-config-data\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.705560 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnb5\" (UniqueName: \"kubernetes.io/projected/7384e37c-9204-4c80-9119-3c5454f32c80-kube-api-access-6nnb5\") pod \"nova-metadata-0\" (UID: \"7384e37c-9204-4c80-9119-3c5454f32c80\") " pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.709247 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mftm\" (UniqueName: \"kubernetes.io/projected/c1d29f7b-f4ed-4266-8713-a7252ca355fe-kube-api-access-9mftm\") pod \"nova-api-0\" (UID: \"c1d29f7b-f4ed-4266-8713-a7252ca355fe\") " pod="openstack/nova-api-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.885236 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 09:33:44 crc kubenswrapper[4856]: I1203 09:33:44.898849 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 09:33:45 crc kubenswrapper[4856]: I1203 09:33:45.418037 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 09:33:45 crc kubenswrapper[4856]: W1203 09:33:45.421031 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7384e37c_9204_4c80_9119_3c5454f32c80.slice/crio-870e00578541d9dfd359435b1bbcda8191cc267cb7bedba46c151237dadfcbd7 WatchSource:0}: Error finding container 870e00578541d9dfd359435b1bbcda8191cc267cb7bedba46c151237dadfcbd7: Status 404 returned error can't find the container with id 870e00578541d9dfd359435b1bbcda8191cc267cb7bedba46c151237dadfcbd7 Dec 03 09:33:45 crc kubenswrapper[4856]: I1203 09:33:45.466672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b253f904-482d-4e19-b899-0304f9382759","Type":"ContainerStarted","Data":"6016658f2e664a728c49f1a242210eeb68b1a48bec7c7c503097b18539104d54"} Dec 03 09:33:45 crc kubenswrapper[4856]: I1203 09:33:45.467046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b253f904-482d-4e19-b899-0304f9382759","Type":"ContainerStarted","Data":"69ca8adcd66f9db26b53b4bbb68842b6ce7c9b8d20777ecd9c1b453703ece9a6"} Dec 03 09:33:45 crc kubenswrapper[4856]: I1203 09:33:45.468661 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7384e37c-9204-4c80-9119-3c5454f32c80","Type":"ContainerStarted","Data":"870e00578541d9dfd359435b1bbcda8191cc267cb7bedba46c151237dadfcbd7"} Dec 03 09:33:45 crc kubenswrapper[4856]: I1203 09:33:45.517829 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 09:33:45 crc kubenswrapper[4856]: I1203 09:33:45.523423 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.523384806 podStartE2EDuration="2.523384806s" podCreationTimestamp="2025-12-03 09:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:45.501049084 +0000 UTC m=+1293.683941385" watchObservedRunningTime="2025-12-03 09:33:45.523384806 +0000 UTC m=+1293.706277107" Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.490050 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7384e37c-9204-4c80-9119-3c5454f32c80","Type":"ContainerStarted","Data":"8909be2c4f1d383a727ddc3d7f44fcc9b0c1b539e70cb9aa928320cfb5287f03"} Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.490428 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7384e37c-9204-4c80-9119-3c5454f32c80","Type":"ContainerStarted","Data":"4a32171f98a7265104d482c353e78e801795fc958d3d86d40d1a8e8f02d41e65"} Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.492642 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1d29f7b-f4ed-4266-8713-a7252ca355fe","Type":"ContainerStarted","Data":"34ae31e8f381c5712856be58f4769b3415254d3c1ceb28c9e5f659fd8ab4c2f7"} Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.492720 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"c1d29f7b-f4ed-4266-8713-a7252ca355fe","Type":"ContainerStarted","Data":"9149786e5560a9360a327f6e8341320c9b7adce0c961fed1a1bdd831d17732be"} Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.492743 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1d29f7b-f4ed-4266-8713-a7252ca355fe","Type":"ContainerStarted","Data":"b5d671e07959ffa1535a9674043b90e0fa8574faace3e94316630de57ac6329b"} Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.522461 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.522438573 podStartE2EDuration="2.522438573s" podCreationTimestamp="2025-12-03 09:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:46.513774624 +0000 UTC m=+1294.696666935" watchObservedRunningTime="2025-12-03 09:33:46.522438573 +0000 UTC m=+1294.705330864" Dec 03 09:33:46 crc kubenswrapper[4856]: I1203 09:33:46.550775 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.550747265 podStartE2EDuration="2.550747265s" podCreationTimestamp="2025-12-03 09:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:33:46.545600816 +0000 UTC m=+1294.728493117" watchObservedRunningTime="2025-12-03 09:33:46.550747265 +0000 UTC m=+1294.733639566" Dec 03 09:33:49 crc kubenswrapper[4856]: I1203 09:33:49.185553 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 09:33:49 crc kubenswrapper[4856]: I1203 09:33:49.886220 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 09:33:49 crc kubenswrapper[4856]: I1203 09:33:49.886270 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 09:33:52 crc kubenswrapper[4856]: I1203 09:33:52.759550 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:33:52 crc kubenswrapper[4856]: I1203 09:33:52.760186 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:33:52 crc kubenswrapper[4856]: I1203 09:33:52.760270 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:33:52 crc kubenswrapper[4856]: I1203 09:33:52.761493 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed837950b4f1ff5aa3351c07c5f4c690e23cb382dcedbde240c466d1592873b7"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:33:52 crc kubenswrapper[4856]: I1203 09:33:52.761564 4856 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://ed837950b4f1ff5aa3351c07c5f4c690e23cb382dcedbde240c466d1592873b7" gracePeriod=600 Dec 03 09:33:53 crc kubenswrapper[4856]: I1203 09:33:53.574543 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="ed837950b4f1ff5aa3351c07c5f4c690e23cb382dcedbde240c466d1592873b7" exitCode=0 Dec 03 09:33:53 crc kubenswrapper[4856]: I1203 09:33:53.574626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"ed837950b4f1ff5aa3351c07c5f4c690e23cb382dcedbde240c466d1592873b7"} Dec 03 09:33:53 crc kubenswrapper[4856]: I1203 09:33:53.576207 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"897525ac673aefdc5e9d8d8ee22fa5a3cb30ead572d739a9785daddd1f821527"} Dec 03 09:33:53 crc kubenswrapper[4856]: I1203 09:33:53.576256 4856 scope.go:117] "RemoveContainer" containerID="c7436d114857ce6a80c004a493e358196edfa3481c485bfceac4068566215e92" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.185920 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.262191 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.452952 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.628333 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.886159 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.886370 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.899446 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:33:54 crc kubenswrapper[4856]: I1203 09:33:54.899541 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 09:33:55 crc kubenswrapper[4856]: I1203 09:33:55.905434 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7384e37c-9204-4c80-9119-3c5454f32c80" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:55 crc kubenswrapper[4856]: I1203 09:33:55.905500 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7384e37c-9204-4c80-9119-3c5454f32c80" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:55 
crc kubenswrapper[4856]: I1203 09:33:55.919077 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1d29f7b-f4ed-4266-8713-a7252ca355fe" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:33:55 crc kubenswrapper[4856]: I1203 09:33:55.919100 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1d29f7b-f4ed-4266-8713-a7252ca355fe" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.894019 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.895615 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.905215 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.907901 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.908327 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.910756 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 09:34:04 crc kubenswrapper[4856]: I1203 09:34:04.914170 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 09:34:05 crc kubenswrapper[4856]: I1203 09:34:05.716968 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 09:34:05 crc kubenswrapper[4856]: I1203 09:34:05.722476 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 09:34:05 crc kubenswrapper[4856]: I1203 09:34:05.727581 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 09:34:14 crc kubenswrapper[4856]: I1203 09:34:14.256436 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:34:15 crc kubenswrapper[4856]: I1203 09:34:15.360111 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:34:18 crc kubenswrapper[4856]: I1203 09:34:18.973046 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="rabbitmq" containerID="cri-o://83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118" gracePeriod=604796 Dec 03 09:34:19 crc kubenswrapper[4856]: I1203 09:34:19.204533 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Dec 03 09:34:20 crc kubenswrapper[4856]: I1203 09:34:20.216290 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" 
podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerName="rabbitmq" containerID="cri-o://b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2" gracePeriod=604796 Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.578584 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622116 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16e71e20-1329-46fc-b544-39febc69ae60-pod-info\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622216 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-tls\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622343 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ckt2\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-kube-api-access-8ckt2\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622380 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16e71e20-1329-46fc-b544-39febc69ae60-erlang-cookie-secret\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622439 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-confd\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622498 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-erlang-cookie\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622583 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622620 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-config-data\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.622760 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-plugins-conf\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc 
kubenswrapper[4856]: I1203 09:34:25.622969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-plugins\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.623183 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-server-conf\") pod \"16e71e20-1329-46fc-b544-39febc69ae60\" (UID: \"16e71e20-1329-46fc-b544-39febc69ae60\") " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.626607 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.627156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.627388 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.631032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e71e20-1329-46fc-b544-39febc69ae60-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.638381 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.641645 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/16e71e20-1329-46fc-b544-39febc69ae60-pod-info" (OuterVolumeSpecName: "pod-info") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.641717 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.642745 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-kube-api-access-8ckt2" (OuterVolumeSpecName: "kube-api-access-8ckt2") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "kube-api-access-8ckt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.686889 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-server-conf" (OuterVolumeSpecName: "server-conf") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.717847 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-config-data" (OuterVolumeSpecName: "config-data") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.726343 4856 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.726857 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727051 4856 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-server-conf\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727140 4856 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/16e71e20-1329-46fc-b544-39febc69ae60-pod-info\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727214 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727276 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ckt2\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-kube-api-access-8ckt2\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727338 4856 reconciler_common.go:293] "Volume detached for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/16e71e20-1329-46fc-b544-39febc69ae60-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727412 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727507 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.727578 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16e71e20-1329-46fc-b544-39febc69ae60-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.765589 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.787120 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "16e71e20-1329-46fc-b544-39febc69ae60" (UID: "16e71e20-1329-46fc-b544-39febc69ae60"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.829559 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/16e71e20-1329-46fc-b544-39febc69ae60-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.829595 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.934632 4856 generic.go:334] "Generic (PLEG): container finished" podID="16e71e20-1329-46fc-b544-39febc69ae60" containerID="83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118" exitCode=0 Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.934685 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16e71e20-1329-46fc-b544-39febc69ae60","Type":"ContainerDied","Data":"83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118"} Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.934718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"16e71e20-1329-46fc-b544-39febc69ae60","Type":"ContainerDied","Data":"febdf524de1bec8493dbcb0162ff75943cef72486f8546ae4f8b63e500b5d9c5"} Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.934724 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.934740 4856 scope.go:117] "RemoveContainer" containerID="83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.961623 4856 scope.go:117] "RemoveContainer" containerID="e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132" Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.990250 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:34:25 crc kubenswrapper[4856]: I1203 09:34:25.999890 4856 scope.go:117] "RemoveContainer" containerID="83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118" Dec 03 09:34:26 crc kubenswrapper[4856]: E1203 09:34:26.000362 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118\": container with ID starting with 83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118 not found: ID does not exist" containerID="83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.000408 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118"} err="failed to get container status \"83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118\": rpc error: code = NotFound desc = could not find container \"83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118\": container with ID starting with 83df2da7f2a06d2c6c2f0258af1a40f1ff81c08ff22e3494422620b5fc000118 not found: ID does not exist" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.000436 4856 scope.go:117] "RemoveContainer" containerID="e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132" Dec 03 09:34:26 crc kubenswrapper[4856]: E1203 09:34:26.000880 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132\": container with ID starting with e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132 not found: ID does not exist" containerID="e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.000958 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132"} err="failed to get container status \"e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132\": rpc error: code = NotFound desc = could not find container \"e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132\": container with ID starting with e0e68a3f5714fe7880e1bfceeb2b9ebd2b579d83a32918859ba872c45e9b2132 not found: ID does not exist" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.002282 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.017784 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:34:26 crc kubenswrapper[4856]: E1203 09:34:26.018357 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="setup-container" 
Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.018383 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="setup-container" Dec 03 09:34:26 crc kubenswrapper[4856]: E1203 09:34:26.018427 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="rabbitmq" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.018438 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="rabbitmq" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.018689 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e71e20-1329-46fc-b544-39febc69ae60" containerName="rabbitmq" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.019991 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.022647 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.022654 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.023040 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.023238 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-f58dm" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.025028 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.025034 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.025069 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.033070 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.136306 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.136387 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.136454 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-config-data\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.136481 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec2efa-e052-4325-bd97-cbd806f725fa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.136677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjb5\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-kube-api-access-qfjb5\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.136785 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.137345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.137516 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.137633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.137931 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.138036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec2efa-e052-4325-bd97-cbd806f725fa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240377 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240476 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240532 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec2efa-e052-4325-bd97-cbd806f725fa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240698 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240829 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-config-data\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240863 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec2efa-e052-4325-bd97-cbd806f725fa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240897 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfjb5\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-kube-api-access-qfjb5\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.240931 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.241189 4856 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.241218 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.241411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.241800 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.241912 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-config-data\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.242913 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/faec2efa-e052-4325-bd97-cbd806f725fa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.246754 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/faec2efa-e052-4325-bd97-cbd806f725fa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.247290 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.247770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.248078 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/faec2efa-e052-4325-bd97-cbd806f725fa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 
09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.262874 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfjb5\" (UniqueName: \"kubernetes.io/projected/faec2efa-e052-4325-bd97-cbd806f725fa-kube-api-access-qfjb5\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.292616 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"faec2efa-e052-4325-bd97-cbd806f725fa\") " pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.342236 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.713124 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e71e20-1329-46fc-b544-39febc69ae60" path="/var/lib/kubelet/pods/16e71e20-1329-46fc-b544-39febc69ae60/volumes" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.851281 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.856716 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978259 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-erlang-cookie\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978340 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-server-conf\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-erlang-cookie-secret\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978515 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-confd\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978554 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978613 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-plugins\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: 
\"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-config-data\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978734 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-plugins-conf\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.978880 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-pod-info\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.998894 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-689sv\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-kube-api-access-689sv\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.979569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.980131 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.983092 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.983999 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.986676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-pod-info" (OuterVolumeSpecName: "pod-info") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 03 09:34:26 crc kubenswrapper[4856]: I1203 09:34:26.987202 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:26.999210 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-tls\") pod \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\" (UID: \"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb\") " Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.001476 4856 generic.go:334] "Generic (PLEG): container finished" podID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerID="b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2" exitCode=0 Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.001622 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb","Type":"ContainerDied","Data":"b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2"} Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.001670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86bc6e23-9abf-4b9e-97bd-2f8e29a294bb","Type":"ContainerDied","Data":"d547d93f22e80fe193efe3630484425cea8cd39d2e382b91231e59472f4a6304"} Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.001698 4856 scope.go:117] "RemoveContainer" containerID="b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002575 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002612 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002625 4856 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002637 4856 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-pod-info\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002650 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002662 4856 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.002613 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.017587 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-config-data" (OuterVolumeSpecName: "config-data") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.017735 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.018717 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"faec2efa-e052-4325-bd97-cbd806f725fa","Type":"ContainerStarted","Data":"c97d2fdeafe1dcea879e6506ea84655116ce8a6126e5937495fd7f4b5a3e36db"} Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.027338 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-kube-api-access-689sv" (OuterVolumeSpecName: "kube-api-access-689sv") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "kube-api-access-689sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.065295 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-server-conf" (OuterVolumeSpecName: "server-conf") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.068786 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.077641 4856 scope.go:117] "RemoveContainer" containerID="0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.106200 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.106256 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-689sv\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-kube-api-access-689sv\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.106270 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.106284 4856 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-server-conf\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.106296 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.128635 4856 scope.go:117] "RemoveContainer" containerID="b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2" Dec 03 09:34:27 crc kubenswrapper[4856]: E1203 09:34:27.129652 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2\": container with ID starting with b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2 not found: ID does not exist" containerID="b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.129956 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2"} err="failed to get container status \"b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2\": rpc error: code = NotFound desc = could not find container \"b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2\": container with ID starting with b693fc474572d3a1de4d989daf11caf243b9d52831425cf14abb009158cee2d2 not found: ID does not exist" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.130077 4856 scope.go:117] "RemoveContainer" containerID="0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3" Dec 03 09:34:27 crc kubenswrapper[4856]: E1203 09:34:27.130795 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3\": container with ID starting with 0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3 not 
found: ID does not exist" containerID="0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.130874 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3"} err="failed to get container status \"0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3\": rpc error: code = NotFound desc = could not find container \"0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3\": container with ID starting with 0d18af4c77bffe5dba4bab8327be842c5736e57b25fe08d1109c185ab5b5ccf3 not found: ID does not exist" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.131496 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" (UID: "86bc6e23-9abf-4b9e-97bd-2f8e29a294bb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.208596 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.387987 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.396516 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.422511 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:34:27 crc kubenswrapper[4856]: E1203 09:34:27.423284 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerName="rabbitmq" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.423313 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerName="rabbitmq" Dec 03 09:34:27 crc kubenswrapper[4856]: E1203 09:34:27.423367 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerName="setup-container" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.423377 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerName="setup-container" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.423613 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" containerName="rabbitmq" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.425242 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.431418 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.431494 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.431547 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.431706 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.431758 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.432539 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.432871 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lvkpt" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.447013 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.527332 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqdp\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-kube-api-access-8rqdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.527396 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8351e71a-ffb6-4596-8edb-05855ea7c503-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.527694 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8351e71a-ffb6-4596-8edb-05855ea7c503-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.527777 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.527912 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.528021 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.528065 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.528187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.528230 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.528334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.528378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631191 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8351e71a-ffb6-4596-8edb-05855ea7c503-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631256 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631293 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631336 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631369 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631505 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631620 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rqdp\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-kube-api-access-8rqdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8351e71a-ffb6-4596-8edb-05855ea7c503-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.631987 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.632159 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.632275 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.632688 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.632895 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.633454 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8351e71a-ffb6-4596-8edb-05855ea7c503-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.637318 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8351e71a-ffb6-4596-8edb-05855ea7c503-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.637346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.637726 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.638777 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8351e71a-ffb6-4596-8edb-05855ea7c503-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.653530 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rqdp\" (UniqueName: \"kubernetes.io/projected/8351e71a-ffb6-4596-8edb-05855ea7c503-kube-api-access-8rqdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.689038 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8351e71a-ffb6-4596-8edb-05855ea7c503\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.753529 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.849110 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-wsh8h"] Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.851687 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.856281 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.906720 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-wsh8h"] Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.941942 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-svc\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.942323 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.942451 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.942610 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-config\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.942720 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpw8t\" (UniqueName: \"kubernetes.io/projected/cb03ec1f-2e29-4ddd-94be-8d566ffde469-kube-api-access-cpw8t\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: I1203 09:34:27.942793 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:27 crc kubenswrapper[4856]: 
I1203 09:34:27.942891 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045026 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpw8t\" (UniqueName: \"kubernetes.io/projected/cb03ec1f-2e29-4ddd-94be-8d566ffde469-kube-api-access-cpw8t\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045099 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045145 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045185 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-svc\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045243 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045319 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.045386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-config\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.046682 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-config\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.048063 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.048761 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.049445 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-svc\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.051196 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.051929 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.101456 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpw8t\" (UniqueName: \"kubernetes.io/projected/cb03ec1f-2e29-4ddd-94be-8d566ffde469-kube-api-access-cpw8t\") pod \"dnsmasq-dns-d558885bc-wsh8h\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") " pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.328475 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.499725 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 09:34:28 crc kubenswrapper[4856]: W1203 09:34:28.610643 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8351e71a_ffb6_4596_8edb_05855ea7c503.slice/crio-54255a89edcd929e8cb667091b8f22d09aa8565f14743ea4f61ab1b68b4a85fa WatchSource:0}: Error finding container 54255a89edcd929e8cb667091b8f22d09aa8565f14743ea4f61ab1b68b4a85fa: Status 404 returned error can't find the container with id 54255a89edcd929e8cb667091b8f22d09aa8565f14743ea4f61ab1b68b4a85fa Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.702928 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bc6e23-9abf-4b9e-97bd-2f8e29a294bb" path="/var/lib/kubelet/pods/86bc6e23-9abf-4b9e-97bd-2f8e29a294bb/volumes" Dec 03 09:34:28 crc kubenswrapper[4856]: I1203 09:34:28.811414 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-wsh8h"] Dec 03 09:34:28 crc kubenswrapper[4856]: W1203 09:34:28.826562 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb03ec1f_2e29_4ddd_94be_8d566ffde469.slice/crio-8f3292faf55e14a4f761e0af1346e3e90f3cdcc9e0df680e8522ae812cc7fd88 WatchSource:0}: Error finding container 8f3292faf55e14a4f761e0af1346e3e90f3cdcc9e0df680e8522ae812cc7fd88: Status 404 returned error can't find the container with id 8f3292faf55e14a4f761e0af1346e3e90f3cdcc9e0df680e8522ae812cc7fd88 Dec 03 09:34:29 crc kubenswrapper[4856]: I1203 09:34:29.102399 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8351e71a-ffb6-4596-8edb-05855ea7c503","Type":"ContainerStarted","Data":"54255a89edcd929e8cb667091b8f22d09aa8565f14743ea4f61ab1b68b4a85fa"} Dec 03 09:34:29 crc kubenswrapper[4856]: I1203 09:34:29.106382 4856 generic.go:334] "Generic (PLEG): container finished" podID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerID="75f5c0b183442fe414413467969cca3e79969606c7f292f5b8b329834953cf6e" exitCode=0 Dec 03 09:34:29 crc kubenswrapper[4856]: I1203 09:34:29.107458 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" event={"ID":"cb03ec1f-2e29-4ddd-94be-8d566ffde469","Type":"ContainerDied","Data":"75f5c0b183442fe414413467969cca3e79969606c7f292f5b8b329834953cf6e"} Dec 03 09:34:29 crc kubenswrapper[4856]: I1203 09:34:29.107542 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" event={"ID":"cb03ec1f-2e29-4ddd-94be-8d566ffde469","Type":"ContainerStarted","Data":"8f3292faf55e14a4f761e0af1346e3e90f3cdcc9e0df680e8522ae812cc7fd88"} Dec 03 09:34:29 crc kubenswrapper[4856]: I1203 09:34:29.114995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"faec2efa-e052-4325-bd97-cbd806f725fa","Type":"ContainerStarted","Data":"a5d3edab4f16c065675d87a5d942282108014ea3eb534bac9d87011e801067f5"} Dec 03 09:34:30 crc kubenswrapper[4856]: I1203 09:34:30.128300 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" event={"ID":"cb03ec1f-2e29-4ddd-94be-8d566ffde469","Type":"ContainerStarted","Data":"704dd54f817755b579225e6d47ff22c08290a17e11d6dfa9688c4725ea9866a8"} Dec 03 09:34:30 crc 
kubenswrapper[4856]: I1203 09:34:30.128667 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:30 crc kubenswrapper[4856]: I1203 09:34:30.157640 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" podStartSLOduration=3.157616678 podStartE2EDuration="3.157616678s" podCreationTimestamp="2025-12-03 09:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:34:30.149902753 +0000 UTC m=+1338.332795074" watchObservedRunningTime="2025-12-03 09:34:30.157616678 +0000 UTC m=+1338.340509019" Dec 03 09:34:31 crc kubenswrapper[4856]: I1203 09:34:31.140945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8351e71a-ffb6-4596-8edb-05855ea7c503","Type":"ContainerStarted","Data":"c1c44df5d24a81ed5b548d09937dffd63c163aea9b6f3c5dfcef8cc5962b8f95"} Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.330232 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.414873 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"] Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.415281 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerName="dnsmasq-dns" containerID="cri-o://01cf34104b690591aaffb0eab69330bfb8fe36c8ace22583ef662ebf7aa4fdc0" gracePeriod=10 Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.583828 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-4htld"] Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.585893 4856 util.go:30] "No sandbox for pod can be found. 
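
The startup-latency record above carries its own arithmetic: podStartE2EDuration (3.157616678s) is exactly watchObservedRunningTime minus podCreationTimestamp, and with firstStartedPulling/lastFinishedPulling zeroed, apparently no image-pull time needed deducting. A quick stdlib check (the layout string is Go's default time format, which matches these fields):

```go
// sloduration.go - verify podStartSLOduration from the timestamps in the record above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Inputs copied from the log record; parse errors ignored as they are known-good.
	created, _ := time.Parse(layout, "2025-12-03 09:34:27 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-12-03 09:34:30.157616678 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 3.157616678
}
```
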
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.600263 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-4htld"] Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698336 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698528 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjxk\" (UniqueName: \"kubernetes.io/projected/bcb163df-ea4f-4591-abf6-85b77b974458-kube-api-access-ctjxk\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698725 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698749 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.698770 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-config\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.800843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.800928 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.801006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctjxk\" (UniqueName: \"kubernetes.io/projected/bcb163df-ea4f-4591-abf6-85b77b974458-kube-api-access-ctjxk\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.801040 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.801058 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.801074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-config\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.801127 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.802968 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.803029 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-config\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.803660 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.804084 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.804357 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.808485 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb163df-ea4f-4591-abf6-85b77b974458-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.830680 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctjxk\" (UniqueName: \"kubernetes.io/projected/bcb163df-ea4f-4591-abf6-85b77b974458-kube-api-access-ctjxk\") pod \"dnsmasq-dns-78c64bc9c5-4htld\" (UID: \"bcb163df-ea4f-4591-abf6-85b77b974458\") " pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:38 crc kubenswrapper[4856]: I1203 09:34:38.964399 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.233991 4856 generic.go:334] "Generic (PLEG): container finished" podID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerID="01cf34104b690591aaffb0eab69330bfb8fe36c8ace22583ef662ebf7aa4fdc0" exitCode=0 Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.234482 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" event={"ID":"2e0b7933-046b-4eb4-aa43-cc3edf08dba1","Type":"ContainerDied","Data":"01cf34104b690591aaffb0eab69330bfb8fe36c8ace22583ef662ebf7aa4fdc0"} Dec 03 09:34:39 crc kubenswrapper[4856]: W1203 09:34:39.426420 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb163df_ea4f_4591_abf6_85b77b974458.slice/crio-de2df96305571a2d61a63bbcdb277f711f1e77ba8005eb5300a83bf445beff31 WatchSource:0}: Error finding container de2df96305571a2d61a63bbcdb277f711f1e77ba8005eb5300a83bf445beff31: Status 404 returned error can't find the container with id de2df96305571a2d61a63bbcdb277f711f1e77ba8005eb5300a83bf445beff31 Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.429712 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-4htld"] Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.462963 4856 util.go:48] "No ready sandbox for pod can be found. 
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.620630 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-swift-storage-0\") pod \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") "
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.621122 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-config\") pod \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") "
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.621946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-sb\") pod \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") "
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.622074 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-nb\") pod \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") "
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.622169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-svc\") pod \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") "
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.622355 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg9db\" (UniqueName: \"kubernetes.io/projected/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-kube-api-access-rg9db\") pod \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\" (UID: \"2e0b7933-046b-4eb4-aa43-cc3edf08dba1\") "
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.628663 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-kube-api-access-rg9db" (OuterVolumeSpecName: "kube-api-access-rg9db") pod "2e0b7933-046b-4eb4-aa43-cc3edf08dba1" (UID: "2e0b7933-046b-4eb4-aa43-cc3edf08dba1"). InnerVolumeSpecName "kube-api-access-rg9db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.680412 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e0b7933-046b-4eb4-aa43-cc3edf08dba1" (UID: "2e0b7933-046b-4eb4-aa43-cc3edf08dba1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.688043 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e0b7933-046b-4eb4-aa43-cc3edf08dba1" (UID: "2e0b7933-046b-4eb4-aa43-cc3edf08dba1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.689334 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-config" (OuterVolumeSpecName: "config") pod "2e0b7933-046b-4eb4-aa43-cc3edf08dba1" (UID: "2e0b7933-046b-4eb4-aa43-cc3edf08dba1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.689473 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e0b7933-046b-4eb4-aa43-cc3edf08dba1" (UID: "2e0b7933-046b-4eb4-aa43-cc3edf08dba1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.702985 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e0b7933-046b-4eb4-aa43-cc3edf08dba1" (UID: "2e0b7933-046b-4eb4-aa43-cc3edf08dba1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.730656 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.730720 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.730736 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.730779 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg9db\" (UniqueName: \"kubernetes.io/projected/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-kube-api-access-rg9db\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.730798 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:39 crc kubenswrapper[4856]: I1203 09:34:39.730829 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0b7933-046b-4eb4-aa43-cc3edf08dba1-config\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.246691 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz" event={"ID":"2e0b7933-046b-4eb4-aa43-cc3edf08dba1","Type":"ContainerDied","Data":"45498da0fefe99ea9b45751cd7d6ef551b34482e24c85f42192492c016c23178"}
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.247223 4856 scope.go:117] "RemoveContainer" containerID="01cf34104b690591aaffb0eab69330bfb8fe36c8ace22583ef662ebf7aa4fdc0"
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.246760 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"
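Each UnmountVolume started / TearDown succeeded / Volume detached triple above removes one volume from the kubelet's actual-state bookkeeping for the terminating pod. A toy Go model of that net effect, folding mount and unmount events into a per-pod set of currently mounted volumes; this is not kubelet's real data structure, just an illustration using names from this log slice:

package main

import "fmt"

type event struct {
	pod, volume string
	mounted     bool // true for "MountVolume.SetUp succeeded", false for "Volume detached"
}

func main() {
	events := []event{
		{"dnsmasq-dns-78c64bc9c5-4htld", "dns-svc", true},
		{"dnsmasq-dns-78c64bc9c5-4htld", "config", true},
		{"dnsmasq-dns-cd5cbd7b9-p9qvz", "dns-svc", false},
		{"dnsmasq-dns-cd5cbd7b9-p9qvz", "config", false},
	}
	state := map[string]map[string]bool{} // pod -> set of mounted volumes
	for _, e := range events {
		if e.mounted {
			if state[e.pod] == nil {
				state[e.pod] = map[string]bool{}
			}
			state[e.pod][e.volume] = true
		} else {
			delete(state[e.pod], e.volume) // no-op if the pod was never tracked
		}
	}
	fmt.Println(state) // only the new pod's volumes remain mounted
}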
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.249777 4856 generic.go:334] "Generic (PLEG): container finished" podID="bcb163df-ea4f-4591-abf6-85b77b974458" containerID="ff6823abe5b31faaed11f3be3a35c1228776e328ff197baabfca27354bf8db53" exitCode=0
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.249896 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" event={"ID":"bcb163df-ea4f-4591-abf6-85b77b974458","Type":"ContainerDied","Data":"ff6823abe5b31faaed11f3be3a35c1228776e328ff197baabfca27354bf8db53"}
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.249978 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" event={"ID":"bcb163df-ea4f-4591-abf6-85b77b974458","Type":"ContainerStarted","Data":"de2df96305571a2d61a63bbcdb277f711f1e77ba8005eb5300a83bf445beff31"}
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.341830 4856 scope.go:117] "RemoveContainer" containerID="f39d6c8ba2341e15ac1d20051c8481a247e95f9d9a1ba29c619326cc4dcd2560"
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.347870 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"]
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.359384 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-p9qvz"]
Dec 03 09:34:40 crc kubenswrapper[4856]: I1203 09:34:40.701571 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" path="/var/lib/kubelet/pods/2e0b7933-046b-4eb4-aa43-cc3edf08dba1/volumes"
Dec 03 09:34:41 crc kubenswrapper[4856]: I1203 09:34:41.264919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" event={"ID":"bcb163df-ea4f-4591-abf6-85b77b974458","Type":"ContainerStarted","Data":"3f92fc35783978ac3d816c5beb73d3d1e45730b48ad9950f7615dd83040001cc"}
Dec 03 09:34:41 crc kubenswrapper[4856]: I1203 09:34:41.265168 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld"
Dec 03 09:34:41 crc kubenswrapper[4856]: I1203 09:34:41.289262 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld" podStartSLOduration=3.28923758 podStartE2EDuration="3.28923758s" podCreationTimestamp="2025-12-03 09:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:34:41.285339652 +0000 UTC m=+1349.468231953" watchObservedRunningTime="2025-12-03 09:34:41.28923758 +0000 UTC m=+1349.472129881"
Dec 03 09:34:48 crc kubenswrapper[4856]: I1203 09:34:48.967161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-4htld"
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.053362 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-wsh8h"]
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.054085 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerName="dnsmasq-dns" containerID="cri-o://704dd54f817755b579225e6d47ff22c08290a17e11d6dfa9688c4725ea9866a8" gracePeriod=10
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.362481 4856 generic.go:334] "Generic (PLEG): container finished" podID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerID="704dd54f817755b579225e6d47ff22c08290a17e11d6dfa9688c4725ea9866a8" exitCode=0
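The "Killing container with a grace period ... gracePeriod=10" entry is the SIGTERM-then-SIGKILL pattern: signal the container, wait up to the grace period, force-kill on expiry. The real kill goes through CRI-O (the cri-o:// containerID above); the sketch below shows only the generic pattern against a local process:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Stand-in for a container's main process; illustrative only.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	gracePeriod := 10 * time.Second
	cmd.Process.Signal(syscall.SIGTERM) // polite stop, like the kubelet's first attempt

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err) // exitCode=0 above means dnsmasq stopped cleanly
	case <-time.After(gracePeriod):
		cmd.Process.Kill() // SIGKILL once the grace period expires
		fmt.Println("grace period expired, killed")
	}
}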
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.362763 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" event={"ID":"cb03ec1f-2e29-4ddd-94be-8d566ffde469","Type":"ContainerDied","Data":"704dd54f817755b579225e6d47ff22c08290a17e11d6dfa9688c4725ea9866a8"}
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.640039 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-wsh8h"
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.789519 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-nb\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.789635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-openstack-edpm-ipam\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.789659 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-swift-storage-0\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.789821 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-config\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.789858 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-sb\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.789908 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpw8t\" (UniqueName: \"kubernetes.io/projected/cb03ec1f-2e29-4ddd-94be-8d566ffde469-kube-api-access-cpw8t\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.790030 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-svc\") pod \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\" (UID: \"cb03ec1f-2e29-4ddd-94be-8d566ffde469\") "
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.806378 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb03ec1f-2e29-4ddd-94be-8d566ffde469-kube-api-access-cpw8t" (OuterVolumeSpecName: "kube-api-access-cpw8t") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "kube-api-access-cpw8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.850858 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-config" (OuterVolumeSpecName: "config") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.854040 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.856936 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.860721 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.864224 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.874582 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb03ec1f-2e29-4ddd-94be-8d566ffde469" (UID: "cb03ec1f-2e29-4ddd-94be-8d566ffde469"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893733 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893777 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893793 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893819 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-config\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893835 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893845 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpw8t\" (UniqueName: \"kubernetes.io/projected/cb03ec1f-2e29-4ddd-94be-8d566ffde469-kube-api-access-cpw8t\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:49 crc kubenswrapper[4856]: I1203 09:34:49.893859 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb03ec1f-2e29-4ddd-94be-8d566ffde469-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.372933 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-wsh8h" event={"ID":"cb03ec1f-2e29-4ddd-94be-8d566ffde469","Type":"ContainerDied","Data":"8f3292faf55e14a4f761e0af1346e3e90f3cdcc9e0df680e8522ae812cc7fd88"}
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.372988 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-wsh8h"
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.373000 4856 scope.go:117] "RemoveContainer" containerID="704dd54f817755b579225e6d47ff22c08290a17e11d6dfa9688c4725ea9866a8"
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.407733 4856 scope.go:117] "RemoveContainer" containerID="75f5c0b183442fe414413467969cca3e79969606c7f292f5b8b329834953cf6e"
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.452577 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-wsh8h"]
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.462529 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-wsh8h"]
Dec 03 09:34:50 crc kubenswrapper[4856]: I1203 09:34:50.702419 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" path="/var/lib/kubelet/pods/cb03ec1f-2e29-4ddd-94be-8d566ffde469/volumes"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.075523 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"]
Dec 03 09:35:02 crc kubenswrapper[4856]: E1203 09:35:02.076517 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerName="init"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.076531 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerName="init"
Dec 03 09:35:02 crc kubenswrapper[4856]: E1203 09:35:02.076546 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerName="dnsmasq-dns"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.076552 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerName="dnsmasq-dns"
Dec 03 09:35:02 crc kubenswrapper[4856]: E1203 09:35:02.076564 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerName="init"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.076571 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerName="init"
Dec 03 09:35:02 crc kubenswrapper[4856]: E1203 09:35:02.076583 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerName="dnsmasq-dns"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.076588 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerName="dnsmasq-dns"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.076768 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0b7933-046b-4eb4-aa43-cc3edf08dba1" containerName="dnsmasq-dns"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.076781 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb03ec1f-2e29-4ddd-94be-8d566ffde469" containerName="dnsmasq-dns"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.077505 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
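The cpu_manager/state_mem/memory_manager lines above show RemoveStaleState dropping resource-manager assignments for containers of pods the API server no longer knows about (both deleted dnsmasq pods). A toy sketch of that pruning, assuming state is a map keyed by podUID and containerName; the UIDs are from the log, the CPUSet strings are made up:

package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// Toy stand-in for checkpointed CPUSet assignments.
	assignments := map[key]string{
		{"2e0b7933-046b-4eb4-aa43-cc3edf08dba1", "init"}:        "0-1",
		{"2e0b7933-046b-4eb4-aa43-cc3edf08dba1", "dnsmasq-dns"}: "2-3",
		{"cb03ec1f-2e29-4ddd-94be-8d566ffde469", "dnsmasq-dns"}: "2-3",
		{"1fc94321-b8f5-471b-9114-c93f984f9ac7", "repo-setup-edpm-deployment-openstack-edpm-ipam"}: "4",
	}
	// Pods still active per the API (the repo-setup pod just arrived via SyncLoop ADD).
	active := map[string]bool{"1fc94321-b8f5-471b-9114-c93f984f9ac7": true}

	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%s containerName=%s\n", k.podUID, k.container)
			delete(assignments, k) // deleting while ranging over a map is well-defined in Go
		}
	}
	fmt.Println("remaining:", assignments)
}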
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.082003 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.082071 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.083647 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.084050 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.102298 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"]
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.102900 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rb9\" (UniqueName: \"kubernetes.io/projected/1fc94321-b8f5-471b-9114-c93f984f9ac7-kube-api-access-k8rb9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.103129 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.103227 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.103503 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.206270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8rb9\" (UniqueName: \"kubernetes.io/projected/1fc94321-b8f5-471b-9114-c93f984f9ac7-kube-api-access-k8rb9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.206395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.206440 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.206513 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.215670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.215944 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.223667 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.227129 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8rb9\" (UniqueName: \"kubernetes.io/projected/1fc94321-b8f5-471b-9114-c93f984f9ac7-kube-api-access-k8rb9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.414228 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.558507 4856 generic.go:334] "Generic (PLEG): container finished" podID="faec2efa-e052-4325-bd97-cbd806f725fa" containerID="a5d3edab4f16c065675d87a5d942282108014ea3eb534bac9d87011e801067f5" exitCode=0
Dec 03 09:35:02 crc kubenswrapper[4856]: I1203 09:35:02.558585 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"faec2efa-e052-4325-bd97-cbd806f725fa","Type":"ContainerDied","Data":"a5d3edab4f16c065675d87a5d942282108014ea3eb534bac9d87011e801067f5"}
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.045783 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"]
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.570613 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s" event={"ID":"1fc94321-b8f5-471b-9114-c93f984f9ac7","Type":"ContainerStarted","Data":"f18d6c439f14d211c2b2cf698b7e453601d7d9da561cba80d2a5836f0ad62457"}
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.573740 4856 generic.go:334] "Generic (PLEG): container finished" podID="8351e71a-ffb6-4596-8edb-05855ea7c503" containerID="c1c44df5d24a81ed5b548d09937dffd63c163aea9b6f3c5dfcef8cc5962b8f95" exitCode=0
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.573849 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8351e71a-ffb6-4596-8edb-05855ea7c503","Type":"ContainerDied","Data":"c1c44df5d24a81ed5b548d09937dffd63c163aea9b6f3c5dfcef8cc5962b8f95"}
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.579985 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"faec2efa-e052-4325-bd97-cbd806f725fa","Type":"ContainerStarted","Data":"0ca461c81414c7842b3b1461bf942fb42c692754f9072fd188a05a8e3ab4c511"}
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.580735 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 03 09:35:03 crc kubenswrapper[4856]: I1203 09:35:03.654930 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.654868746 podStartE2EDuration="38.654868746s" podCreationTimestamp="2025-12-03 09:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:35:03.643624563 +0000 UTC m=+1371.826516874" watchObservedRunningTime="2025-12-03 09:35:03.654868746 +0000 UTC m=+1371.837761047"
Dec 03 09:35:04 crc kubenswrapper[4856]: I1203 09:35:04.596759 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8351e71a-ffb6-4596-8edb-05855ea7c503","Type":"ContainerStarted","Data":"18fa8708c33a0f0741f6f3568e4cf2f2db71fd0c3fdf8fd0808e3338e27eb686"}
Dec 03 09:35:04 crc kubenswrapper[4856]: I1203 09:35:04.597558 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 09:35:04 crc kubenswrapper[4856]: I1203 09:35:04.640306 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.640276179 podStartE2EDuration="37.640276179s" podCreationTimestamp="2025-12-03 09:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:35:04.635427107 +0000 UTC m=+1372.818319418" watchObservedRunningTime="2025-12-03 09:35:04.640276179 +0000 UTC m=+1372.823168480"
Dec 03 09:35:13 crc kubenswrapper[4856]: I1203 09:35:13.912053 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:35:14 crc kubenswrapper[4856]: I1203 09:35:14.869338 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s" event={"ID":"1fc94321-b8f5-471b-9114-c93f984f9ac7","Type":"ContainerStarted","Data":"60632c2d3fbcecf3dedb80622b83e7b713c70147f7b5318625422d718d4dcf44"}
Dec 03 09:35:14 crc kubenswrapper[4856]: I1203 09:35:14.894473 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s" podStartSLOduration=2.043552693 podStartE2EDuration="12.894432884s" podCreationTimestamp="2025-12-03 09:35:02 +0000 UTC" firstStartedPulling="2025-12-03 09:35:03.057399503 +0000 UTC m=+1371.240291804" lastFinishedPulling="2025-12-03 09:35:13.908279694 +0000 UTC m=+1382.091171995" observedRunningTime="2025-12-03 09:35:14.886819772 +0000 UTC m=+1383.069712073" watchObservedRunningTime="2025-12-03 09:35:14.894432884 +0000 UTC m=+1383.077325195"
Dec 03 09:35:16 crc kubenswrapper[4856]: I1203 09:35:16.348165 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 03 09:35:17 crc kubenswrapper[4856]: I1203 09:35:17.758145 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 09:35:28 crc kubenswrapper[4856]: I1203 09:35:28.018958 4856 generic.go:334] "Generic (PLEG): container finished" podID="1fc94321-b8f5-471b-9114-c93f984f9ac7" containerID="60632c2d3fbcecf3dedb80622b83e7b713c70147f7b5318625422d718d4dcf44" exitCode=0
Dec 03 09:35:28 crc kubenswrapper[4856]: I1203 09:35:28.019056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s" event={"ID":"1fc94321-b8f5-471b-9114-c93f984f9ac7","Type":"ContainerDied","Data":"60632c2d3fbcecf3dedb80622b83e7b713c70147f7b5318625422d718d4dcf44"}
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.686320 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
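The pod_startup_latency_tracker entries above are plain timestamp arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (podStartSLOduration additionally excludes image-pull time). Checking the repo-setup pod's numbers with values copied from the log; the layout string is an assumption about the timestamps' printed format:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // assumed from the log's timestamp shape
	created, err := time.Parse(layout, "2025-12-03 09:35:02 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-03 09:35:14.894432884 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Reproduces podStartE2EDuration="12.894432884s" for repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s.
	fmt.Println(observed.Sub(created))
}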
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.758510 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-ssh-key\") pod \"1fc94321-b8f5-471b-9114-c93f984f9ac7\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") "
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.758743 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-inventory\") pod \"1fc94321-b8f5-471b-9114-c93f984f9ac7\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") "
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.758959 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-repo-setup-combined-ca-bundle\") pod \"1fc94321-b8f5-471b-9114-c93f984f9ac7\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") "
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.759106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8rb9\" (UniqueName: \"kubernetes.io/projected/1fc94321-b8f5-471b-9114-c93f984f9ac7-kube-api-access-k8rb9\") pod \"1fc94321-b8f5-471b-9114-c93f984f9ac7\" (UID: \"1fc94321-b8f5-471b-9114-c93f984f9ac7\") "
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.784057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc94321-b8f5-471b-9114-c93f984f9ac7-kube-api-access-k8rb9" (OuterVolumeSpecName: "kube-api-access-k8rb9") pod "1fc94321-b8f5-471b-9114-c93f984f9ac7" (UID: "1fc94321-b8f5-471b-9114-c93f984f9ac7"). InnerVolumeSpecName "kube-api-access-k8rb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.784332 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1fc94321-b8f5-471b-9114-c93f984f9ac7" (UID: "1fc94321-b8f5-471b-9114-c93f984f9ac7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.797098 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1fc94321-b8f5-471b-9114-c93f984f9ac7" (UID: "1fc94321-b8f5-471b-9114-c93f984f9ac7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.800399 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-inventory" (OuterVolumeSpecName: "inventory") pod "1fc94321-b8f5-471b-9114-c93f984f9ac7" (UID: "1fc94321-b8f5-471b-9114-c93f984f9ac7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.870084 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.870142 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.870157 4856 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc94321-b8f5-471b-9114-c93f984f9ac7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:29 crc kubenswrapper[4856]: I1203 09:35:29.870172 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8rb9\" (UniqueName: \"kubernetes.io/projected/1fc94321-b8f5-471b-9114-c93f984f9ac7-kube-api-access-k8rb9\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.040981 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s" event={"ID":"1fc94321-b8f5-471b-9114-c93f984f9ac7","Type":"ContainerDied","Data":"f18d6c439f14d211c2b2cf698b7e453601d7d9da561cba80d2a5836f0ad62457"}
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.041050 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18d6c439f14d211c2b2cf698b7e453601d7d9da561cba80d2a5836f0ad62457"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.041067 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.134352 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"]
Dec 03 09:35:30 crc kubenswrapper[4856]: E1203 09:35:30.135043 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc94321-b8f5-471b-9114-c93f984f9ac7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.135066 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc94321-b8f5-471b-9114-c93f984f9ac7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.135337 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc94321-b8f5-471b-9114-c93f984f9ac7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.136411 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.143916 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.143957 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.144072 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.144408 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.147433 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"]
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.297380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.297461 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.297527 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb76n\" (UniqueName: \"kubernetes.io/projected/c0e24ff8-9624-4934-852f-57249413e4ee-kube-api-access-wb76n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.399912 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.399981 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.400033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb76n\" (UniqueName: \"kubernetes.io/projected/c0e24ff8-9624-4934-852f-57249413e4ee-kube-api-access-wb76n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.404005 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.405676 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.429037 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb76n\" (UniqueName: \"kubernetes.io/projected/c0e24ff8-9624-4934-852f-57249413e4ee-kube-api-access-wb76n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gbhrt\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:30 crc kubenswrapper[4856]: I1203 09:35:30.473575 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:31 crc kubenswrapper[4856]: I1203 09:35:31.105849 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"]
Dec 03 09:35:32 crc kubenswrapper[4856]: I1203 09:35:32.068705 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt" event={"ID":"c0e24ff8-9624-4934-852f-57249413e4ee","Type":"ContainerStarted","Data":"7797afa526e89e999e8308db4249003fb347b3b0c27ecf496f5bbc09195a055c"}
Dec 03 09:35:33 crc kubenswrapper[4856]: I1203 09:35:33.082326 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt" event={"ID":"c0e24ff8-9624-4934-852f-57249413e4ee","Type":"ContainerStarted","Data":"96b631168491d9689c72d8f8682e9daedc2153c45e520ee51682423c742eb26d"}
Dec 03 09:35:33 crc kubenswrapper[4856]: I1203 09:35:33.106845 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt" podStartSLOduration=1.674199701 podStartE2EDuration="3.106794963s" podCreationTimestamp="2025-12-03 09:35:30 +0000 UTC" firstStartedPulling="2025-12-03 09:35:31.112455867 +0000 UTC m=+1399.295348168" lastFinishedPulling="2025-12-03 09:35:32.545051129 +0000 UTC m=+1400.727943430" observedRunningTime="2025-12-03 09:35:33.103406838 +0000 UTC m=+1401.286299139" watchObservedRunningTime="2025-12-03 09:35:33.106794963 +0000 UTC m=+1401.289687264"
Dec 03 09:35:35 crc kubenswrapper[4856]: I1203 09:35:35.714192 4856 scope.go:117] "RemoveContainer" containerID="cf4e9e205f99b7f2208d58d50dd9e2ffde1364c2706d48057c071ab6ba888a0c"
Dec 03 09:35:35 crc kubenswrapper[4856]: I1203 09:35:35.753239 4856 scope.go:117] "RemoveContainer" containerID="6b1548cf80841e0b56157c15a406f0fe1c4c7ebcd421205aa805cea855291802"
Dec 03 09:35:35 crc kubenswrapper[4856]: I1203 09:35:35.802789 4856 scope.go:117] "RemoveContainer" containerID="462944fed5eac808e33a2e120d4f1d47b3230f419c21c98ad37b706c913ecf6d"
Dec 03 09:35:35 crc kubenswrapper[4856]: I1203 09:35:35.828485 4856 scope.go:117] "RemoveContainer" containerID="d5541cbfc308cfb952d707ecd63fb29c334c3cd3e454abe242e563dba2c0de2c"
Dec 03 09:35:35 crc kubenswrapper[4856]: I1203 09:35:35.898888 4856 scope.go:117] "RemoveContainer" containerID="8013f351d8f14936b7f5fd5d3439c74ac5f9485b4ad9bb22f17b0dddc30bedfb"
Dec 03 09:35:36 crc kubenswrapper[4856]: I1203 09:35:36.122568 4856 generic.go:334] "Generic (PLEG): container finished" podID="c0e24ff8-9624-4934-852f-57249413e4ee" containerID="96b631168491d9689c72d8f8682e9daedc2153c45e520ee51682423c742eb26d" exitCode=0
Dec 03 09:35:36 crc kubenswrapper[4856]: I1203 09:35:36.122653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt" event={"ID":"c0e24ff8-9624-4934-852f-57249413e4ee","Type":"ContainerDied","Data":"96b631168491d9689c72d8f8682e9daedc2153c45e520ee51682423c742eb26d"}
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.632589 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.792750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-ssh-key\") pod \"c0e24ff8-9624-4934-852f-57249413e4ee\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") "
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.792838 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb76n\" (UniqueName: \"kubernetes.io/projected/c0e24ff8-9624-4934-852f-57249413e4ee-kube-api-access-wb76n\") pod \"c0e24ff8-9624-4934-852f-57249413e4ee\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") "
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.793236 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-inventory\") pod \"c0e24ff8-9624-4934-852f-57249413e4ee\" (UID: \"c0e24ff8-9624-4934-852f-57249413e4ee\") "
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.803221 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e24ff8-9624-4934-852f-57249413e4ee-kube-api-access-wb76n" (OuterVolumeSpecName: "kube-api-access-wb76n") pod "c0e24ff8-9624-4934-852f-57249413e4ee" (UID: "c0e24ff8-9624-4934-852f-57249413e4ee"). InnerVolumeSpecName "kube-api-access-wb76n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.829912 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c0e24ff8-9624-4934-852f-57249413e4ee" (UID: "c0e24ff8-9624-4934-852f-57249413e4ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.832510 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-inventory" (OuterVolumeSpecName: "inventory") pod "c0e24ff8-9624-4934-852f-57249413e4ee" (UID: "c0e24ff8-9624-4934-852f-57249413e4ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.896951 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.897001 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c0e24ff8-9624-4934-852f-57249413e4ee-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:37 crc kubenswrapper[4856]: I1203 09:35:37.897015 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb76n\" (UniqueName: \"kubernetes.io/projected/c0e24ff8-9624-4934-852f-57249413e4ee-kube-api-access-wb76n\") on node \"crc\" DevicePath \"\""
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.143237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt" event={"ID":"c0e24ff8-9624-4934-852f-57249413e4ee","Type":"ContainerDied","Data":"7797afa526e89e999e8308db4249003fb347b3b0c27ecf496f5bbc09195a055c"}
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.143283 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7797afa526e89e999e8308db4249003fb347b3b0c27ecf496f5bbc09195a055c"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.143382 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gbhrt"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.273354 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"]
Dec 03 09:35:38 crc kubenswrapper[4856]: E1203 09:35:38.274099 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e24ff8-9624-4934-852f-57249413e4ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.274138 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e24ff8-9624-4934-852f-57249413e4ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.274848 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e24ff8-9624-4934-852f-57249413e4ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.276006 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.281606 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.281707 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.281791 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.282020 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.290128 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"]
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.408269 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.408724 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.408775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k98c6\" (UniqueName: \"kubernetes.io/projected/1b94e685-696f-4e31-8296-a234c7767af2-kube-api-access-k98c6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.409004 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.511318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k98c6\" (UniqueName: \"kubernetes.io/projected/1b94e685-696f-4e31-8296-a234c7767af2-kube-api-access-k98c6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.511460 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.511613 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.511664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.519485 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.520059 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.520353 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.532962 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k98c6\" (UniqueName: \"kubernetes.io/projected/1b94e685-696f-4e31-8296-a234c7767af2-kube-api-access-k98c6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bff68\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Dec 03 09:35:38 crc kubenswrapper[4856]: I1203 09:35:38.606182 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" Dec 03 09:35:39 crc kubenswrapper[4856]: I1203 09:35:39.385184 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68"] Dec 03 09:35:40 crc kubenswrapper[4856]: I1203 09:35:40.166947 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" event={"ID":"1b94e685-696f-4e31-8296-a234c7767af2","Type":"ContainerStarted","Data":"beb6d3cbf9beef453a21572195c41df287893069daebd00270897629d929a8b3"} Dec 03 09:35:41 crc kubenswrapper[4856]: I1203 09:35:41.181395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" event={"ID":"1b94e685-696f-4e31-8296-a234c7767af2","Type":"ContainerStarted","Data":"31a57e40280909303ebdb4e8e2902c00e7b70409064003848a22070404c9e107"} Dec 03 09:35:41 crc kubenswrapper[4856]: I1203 09:35:41.209004 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" podStartSLOduration=2.596704626 podStartE2EDuration="3.208969723s" podCreationTimestamp="2025-12-03 09:35:38 +0000 UTC" firstStartedPulling="2025-12-03 09:35:39.391852068 +0000 UTC m=+1407.574744369" lastFinishedPulling="2025-12-03 09:35:40.004117165 +0000 UTC m=+1408.187009466" observedRunningTime="2025-12-03 09:35:41.204022968 +0000 UTC m=+1409.386915279" watchObservedRunningTime="2025-12-03 09:35:41.208969723 +0000 UTC m=+1409.391862024" Dec 03 09:36:22 crc kubenswrapper[4856]: I1203 09:36:22.758539 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:36:22 crc kubenswrapper[4856]: I1203 09:36:22.760082 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:36:36 crc kubenswrapper[4856]: I1203 09:36:36.060577 4856 scope.go:117] "RemoveContainer" containerID="db18575320ee58947811506695d259f2d60012e0f2e4bf0a9febe1f1c7c67602" Dec 03 09:36:52 crc kubenswrapper[4856]: I1203 09:36:52.759951 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:36:52 crc kubenswrapper[4856]: I1203 09:36:52.761113 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:37:22 crc kubenswrapper[4856]: I1203 09:37:22.758508 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:37:22 crc kubenswrapper[4856]: I1203 09:37:22.759576 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:37:22 crc kubenswrapper[4856]: I1203 09:37:22.759645 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:37:22 crc kubenswrapper[4856]: I1203 09:37:22.760655 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"897525ac673aefdc5e9d8d8ee22fa5a3cb30ead572d739a9785daddd1f821527"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:37:22 crc kubenswrapper[4856]: I1203 09:37:22.760715 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://897525ac673aefdc5e9d8d8ee22fa5a3cb30ead572d739a9785daddd1f821527" gracePeriod=600 Dec 03 09:37:23 crc kubenswrapper[4856]: I1203 09:37:23.458491 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="897525ac673aefdc5e9d8d8ee22fa5a3cb30ead572d739a9785daddd1f821527" exitCode=0 Dec 03 09:37:23 crc kubenswrapper[4856]: I1203 09:37:23.458644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"897525ac673aefdc5e9d8d8ee22fa5a3cb30ead572d739a9785daddd1f821527"} Dec 03 09:37:23 crc kubenswrapper[4856]: I1203 09:37:23.461032 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"} Dec 03 09:37:23 crc kubenswrapper[4856]: I1203 09:37:23.461105 4856 scope.go:117] "RemoveContainer" containerID="ed837950b4f1ff5aa3351c07c5f4c690e23cb382dcedbde240c466d1592873b7" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.215917 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rtbsx"] Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.219341 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.227707 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtbsx"] Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.318251 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-utilities\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.318981 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-catalog-content\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.319206 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmh94\" (UniqueName: \"kubernetes.io/projected/a41191a0-d806-47cc-a09a-e32d9586bb07-kube-api-access-rmh94\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.421872 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmh94\" (UniqueName: \"kubernetes.io/projected/a41191a0-d806-47cc-a09a-e32d9586bb07-kube-api-access-rmh94\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.422375 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-utilities\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.422679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-catalog-content\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.423330 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-utilities\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.423409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-catalog-content\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.458089 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rmh94\" (UniqueName: \"kubernetes.io/projected/a41191a0-d806-47cc-a09a-e32d9586bb07-kube-api-access-rmh94\") pod \"community-operators-rtbsx\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:45 crc kubenswrapper[4856]: I1203 09:38:45.548954 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:46 crc kubenswrapper[4856]: I1203 09:38:46.162611 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtbsx"] Dec 03 09:38:46 crc kubenswrapper[4856]: I1203 09:38:46.493192 4856 generic.go:334] "Generic (PLEG): container finished" podID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerID="553b2d7bc16dd6daf85cdd2910366312aedc3f9b88a2423233643d0d1ab69409" exitCode=0 Dec 03 09:38:46 crc kubenswrapper[4856]: I1203 09:38:46.493792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtbsx" event={"ID":"a41191a0-d806-47cc-a09a-e32d9586bb07","Type":"ContainerDied","Data":"553b2d7bc16dd6daf85cdd2910366312aedc3f9b88a2423233643d0d1ab69409"} Dec 03 09:38:46 crc kubenswrapper[4856]: I1203 09:38:46.493848 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtbsx" event={"ID":"a41191a0-d806-47cc-a09a-e32d9586bb07","Type":"ContainerStarted","Data":"7c384de7f36d3f764a4c4cdab6876804169896887d794bbc786b91f86d12fb5c"} Dec 03 09:38:46 crc kubenswrapper[4856]: I1203 09:38:46.497512 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:38:48 crc kubenswrapper[4856]: I1203 09:38:48.521429 4856 generic.go:334] "Generic (PLEG): container finished" podID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerID="f39b81c3f972b5170c090fb3893d58ce2a3797a6be9441aa44d82b044f47c132" exitCode=0 Dec 03 09:38:48 crc kubenswrapper[4856]: I1203 09:38:48.521557 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtbsx" event={"ID":"a41191a0-d806-47cc-a09a-e32d9586bb07","Type":"ContainerDied","Data":"f39b81c3f972b5170c090fb3893d58ce2a3797a6be9441aa44d82b044f47c132"} Dec 03 09:38:49 crc kubenswrapper[4856]: I1203 09:38:49.537911 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtbsx" event={"ID":"a41191a0-d806-47cc-a09a-e32d9586bb07","Type":"ContainerStarted","Data":"b6d3d2dbe6f0a0d4c8adcde316808d767093fa73ea05ef45de38c726eef04054"} Dec 03 09:38:49 crc kubenswrapper[4856]: I1203 09:38:49.560897 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rtbsx" podStartSLOduration=2.10310537 podStartE2EDuration="4.560869382s" podCreationTimestamp="2025-12-03 09:38:45 +0000 UTC" firstStartedPulling="2025-12-03 09:38:46.497176173 +0000 UTC m=+1594.680068474" lastFinishedPulling="2025-12-03 09:38:48.954940195 +0000 UTC m=+1597.137832486" observedRunningTime="2025-12-03 09:38:49.557725241 +0000 UTC m=+1597.740617542" watchObservedRunningTime="2025-12-03 09:38:49.560869382 +0000 UTC m=+1597.743761683" Dec 03 09:38:55 crc kubenswrapper[4856]: I1203 09:38:55.549474 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:55 crc kubenswrapper[4856]: I1203 09:38:55.550040 4856 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:55 crc kubenswrapper[4856]: I1203 09:38:55.608647 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:55 crc kubenswrapper[4856]: I1203 09:38:55.955872 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:56 crc kubenswrapper[4856]: I1203 09:38:56.019977 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtbsx"] Dec 03 09:38:57 crc kubenswrapper[4856]: I1203 09:38:57.927472 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rtbsx" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="registry-server" containerID="cri-o://b6d3d2dbe6f0a0d4c8adcde316808d767093fa73ea05ef45de38c726eef04054" gracePeriod=2 Dec 03 09:38:58 crc kubenswrapper[4856]: I1203 09:38:58.940849 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtbsx" event={"ID":"a41191a0-d806-47cc-a09a-e32d9586bb07","Type":"ContainerDied","Data":"b6d3d2dbe6f0a0d4c8adcde316808d767093fa73ea05ef45de38c726eef04054"} Dec 03 09:38:58 crc kubenswrapper[4856]: I1203 09:38:58.940840 4856 generic.go:334] "Generic (PLEG): container finished" podID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerID="b6d3d2dbe6f0a0d4c8adcde316808d767093fa73ea05ef45de38c726eef04054" exitCode=0 Dec 03 09:38:58 crc kubenswrapper[4856]: I1203 09:38:58.941570 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtbsx" event={"ID":"a41191a0-d806-47cc-a09a-e32d9586bb07","Type":"ContainerDied","Data":"7c384de7f36d3f764a4c4cdab6876804169896887d794bbc786b91f86d12fb5c"} Dec 03 09:38:58 crc kubenswrapper[4856]: I1203 09:38:58.941585 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c384de7f36d3f764a4c4cdab6876804169896887d794bbc786b91f86d12fb5c" Dec 03 09:38:58 crc kubenswrapper[4856]: I1203 09:38:58.971342 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.067632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-utilities\") pod \"a41191a0-d806-47cc-a09a-e32d9586bb07\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.067741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmh94\" (UniqueName: \"kubernetes.io/projected/a41191a0-d806-47cc-a09a-e32d9586bb07-kube-api-access-rmh94\") pod \"a41191a0-d806-47cc-a09a-e32d9586bb07\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.067862 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-catalog-content\") pod \"a41191a0-d806-47cc-a09a-e32d9586bb07\" (UID: \"a41191a0-d806-47cc-a09a-e32d9586bb07\") " Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.070272 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-utilities" (OuterVolumeSpecName: "utilities") pod "a41191a0-d806-47cc-a09a-e32d9586bb07" (UID: "a41191a0-d806-47cc-a09a-e32d9586bb07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.076655 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41191a0-d806-47cc-a09a-e32d9586bb07-kube-api-access-rmh94" (OuterVolumeSpecName: "kube-api-access-rmh94") pod "a41191a0-d806-47cc-a09a-e32d9586bb07" (UID: "a41191a0-d806-47cc-a09a-e32d9586bb07"). InnerVolumeSpecName "kube-api-access-rmh94". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.126603 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a41191a0-d806-47cc-a09a-e32d9586bb07" (UID: "a41191a0-d806-47cc-a09a-e32d9586bb07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.170707 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.170761 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41191a0-d806-47cc-a09a-e32d9586bb07-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.170774 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmh94\" (UniqueName: \"kubernetes.io/projected/a41191a0-d806-47cc-a09a-e32d9586bb07-kube-api-access-rmh94\") on node \"crc\" DevicePath \"\"" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.953768 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtbsx" Dec 03 09:38:59 crc kubenswrapper[4856]: I1203 09:38:59.993481 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtbsx"] Dec 03 09:39:00 crc kubenswrapper[4856]: I1203 09:39:00.005245 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rtbsx"] Dec 03 09:39:00 crc kubenswrapper[4856]: I1203 09:39:00.702952 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" path="/var/lib/kubelet/pods/a41191a0-d806-47cc-a09a-e32d9586bb07/volumes" Dec 03 09:39:07 crc kubenswrapper[4856]: I1203 09:39:07.058742 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9q8bs"] Dec 03 09:39:07 crc kubenswrapper[4856]: I1203 09:39:07.069716 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9q8bs"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.041316 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9jxlg"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.052973 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ztwpp"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.067926 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3de4-account-create-update-2hqfj"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.078563 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e2ae-account-create-update-jjs2q"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.087082 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ztwpp"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.097194 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9jxlg"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.110871 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e2ae-account-create-update-jjs2q"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.124329 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3de4-account-create-update-2hqfj"] Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.703582 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfde763-a165-46e4-82e3-c49f636a6486" path="/var/lib/kubelet/pods/2cfde763-a165-46e4-82e3-c49f636a6486/volumes" Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.704357 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bdfeb32-52bc-47c4-b845-b4a9602fc64a" path="/var/lib/kubelet/pods/4bdfeb32-52bc-47c4-b845-b4a9602fc64a/volumes" Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.705001 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2" path="/var/lib/kubelet/pods/6fab7b5a-4a1f-4836-8f3e-6dfa295c28f2/volumes" Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.705659 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959b10d9-2ea2-41e5-ab5a-e69fb45ba078" path="/var/lib/kubelet/pods/959b10d9-2ea2-41e5-ab5a-e69fb45ba078/volumes" Dec 03 09:39:08 crc kubenswrapper[4856]: I1203 09:39:08.706940 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee726f5c-0148-49df-80ff-5e02896db9b5" 
path="/var/lib/kubelet/pods/ee726f5c-0148-49df-80ff-5e02896db9b5/volumes" Dec 03 09:39:09 crc kubenswrapper[4856]: I1203 09:39:09.078745 4856 generic.go:334] "Generic (PLEG): container finished" podID="1b94e685-696f-4e31-8296-a234c7767af2" containerID="31a57e40280909303ebdb4e8e2902c00e7b70409064003848a22070404c9e107" exitCode=0 Dec 03 09:39:09 crc kubenswrapper[4856]: I1203 09:39:09.078829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" event={"ID":"1b94e685-696f-4e31-8296-a234c7767af2","Type":"ContainerDied","Data":"31a57e40280909303ebdb4e8e2902c00e7b70409064003848a22070404c9e107"} Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.033943 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-09dd-account-create-update-qzcxv"] Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.047601 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-09dd-account-create-update-qzcxv"] Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.588341 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.649456 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-ssh-key\") pod \"1b94e685-696f-4e31-8296-a234c7767af2\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.649589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-inventory\") pod \"1b94e685-696f-4e31-8296-a234c7767af2\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.649686 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k98c6\" (UniqueName: \"kubernetes.io/projected/1b94e685-696f-4e31-8296-a234c7767af2-kube-api-access-k98c6\") pod \"1b94e685-696f-4e31-8296-a234c7767af2\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.649771 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-bootstrap-combined-ca-bundle\") pod \"1b94e685-696f-4e31-8296-a234c7767af2\" (UID: \"1b94e685-696f-4e31-8296-a234c7767af2\") " Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.658346 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b94e685-696f-4e31-8296-a234c7767af2-kube-api-access-k98c6" (OuterVolumeSpecName: "kube-api-access-k98c6") pod "1b94e685-696f-4e31-8296-a234c7767af2" (UID: "1b94e685-696f-4e31-8296-a234c7767af2"). InnerVolumeSpecName "kube-api-access-k98c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.662056 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1b94e685-696f-4e31-8296-a234c7767af2" (UID: "1b94e685-696f-4e31-8296-a234c7767af2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.855027 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k98c6\" (UniqueName: \"kubernetes.io/projected/1b94e685-696f-4e31-8296-a234c7767af2-kube-api-access-k98c6\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.855073 4856 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.859457 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-inventory" (OuterVolumeSpecName: "inventory") pod "1b94e685-696f-4e31-8296-a234c7767af2" (UID: "1b94e685-696f-4e31-8296-a234c7767af2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.868980 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1b94e685-696f-4e31-8296-a234c7767af2" (UID: "1b94e685-696f-4e31-8296-a234c7767af2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.883601 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d88e41-24e3-42e5-915b-235ae9b3515a" path="/var/lib/kubelet/pods/03d88e41-24e3-42e5-915b-235ae9b3515a/volumes" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.957050 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:10 crc kubenswrapper[4856]: I1203 09:39:10.957432 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b94e685-696f-4e31-8296-a234c7767af2-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.106573 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" event={"ID":"1b94e685-696f-4e31-8296-a234c7767af2","Type":"ContainerDied","Data":"beb6d3cbf9beef453a21572195c41df287893069daebd00270897629d929a8b3"} Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.106666 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb6d3cbf9beef453a21572195c41df287893069daebd00270897629d929a8b3" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.106761 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bff68" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.215176 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2"] Dec 03 09:39:11 crc kubenswrapper[4856]: E1203 09:39:11.215893 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="extract-utilities" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.215921 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="extract-utilities" Dec 03 09:39:11 crc kubenswrapper[4856]: E1203 09:39:11.215945 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="extract-content" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.215955 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="extract-content" Dec 03 09:39:11 crc kubenswrapper[4856]: E1203 09:39:11.215970 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="registry-server" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.215980 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="registry-server" Dec 03 09:39:11 crc kubenswrapper[4856]: E1203 09:39:11.215995 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b94e685-696f-4e31-8296-a234c7767af2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.216008 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b94e685-696f-4e31-8296-a234c7767af2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.216349 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41191a0-d806-47cc-a09a-e32d9586bb07" containerName="registry-server" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.216396 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b94e685-696f-4e31-8296-a234c7767af2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.217427 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.220851 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.221210 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.221448 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.223559 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.244618 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2"] Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.266115 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.266422 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2spgk\" (UniqueName: \"kubernetes.io/projected/a22816e8-f368-454d-93c3-762e0d5e88d7-kube-api-access-2spgk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.266521 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.369209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.370525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2spgk\" (UniqueName: \"kubernetes.io/projected/a22816e8-f368-454d-93c3-762e0d5e88d7-kube-api-access-2spgk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.370612 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.376563 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.376652 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.401031 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2spgk\" (UniqueName: \"kubernetes.io/projected/a22816e8-f368-454d-93c3-762e0d5e88d7-kube-api-access-2spgk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.446353 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xs8c2"] Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.449267 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.469751 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs8c2"] Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.472639 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-catalog-content\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.472778 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-utilities\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.472905 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbp4v\" (UniqueName: \"kubernetes.io/projected/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-kube-api-access-qbp4v\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.547954 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.574851 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbp4v\" (UniqueName: \"kubernetes.io/projected/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-kube-api-access-qbp4v\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.575009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-catalog-content\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.575079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-utilities\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.575861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-utilities\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.576078 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-catalog-content\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.594993 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbp4v\" (UniqueName: \"kubernetes.io/projected/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-kube-api-access-qbp4v\") pod \"redhat-marketplace-xs8c2\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:11 crc kubenswrapper[4856]: I1203 09:39:11.823320 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:12 crc kubenswrapper[4856]: I1203 09:39:12.180002 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2"] Dec 03 09:39:12 crc kubenswrapper[4856]: I1203 09:39:12.360582 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs8c2"] Dec 03 09:39:12 crc kubenswrapper[4856]: W1203 09:39:12.364574 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005d117a_cc9b_4f39_a9f7_c98b4915fc2e.slice/crio-88c27a6446165ba09c7a60f76ab910c58a59acf29c882fe892ad6c067e5a6d7b WatchSource:0}: Error finding container 88c27a6446165ba09c7a60f76ab910c58a59acf29c882fe892ad6c067e5a6d7b: Status 404 returned error can't find the container with id 88c27a6446165ba09c7a60f76ab910c58a59acf29c882fe892ad6c067e5a6d7b Dec 03 09:39:12 crc kubenswrapper[4856]: I1203 09:39:12.667081 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:39:13 crc kubenswrapper[4856]: I1203 09:39:13.145518 4856 generic.go:334] "Generic (PLEG): container finished" podID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerID="e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2" exitCode=0 Dec 03 09:39:13 crc kubenswrapper[4856]: I1203 09:39:13.145638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerDied","Data":"e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2"} Dec 03 09:39:13 crc kubenswrapper[4856]: I1203 09:39:13.146143 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerStarted","Data":"88c27a6446165ba09c7a60f76ab910c58a59acf29c882fe892ad6c067e5a6d7b"} Dec 03 09:39:13 crc kubenswrapper[4856]: I1203 09:39:13.150504 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" event={"ID":"a22816e8-f368-454d-93c3-762e0d5e88d7","Type":"ContainerStarted","Data":"03dacda2a64cff52eb135c413d46382919ebc108f5e30c2db67ee9c72d3364b2"} Dec 03 09:39:13 crc kubenswrapper[4856]: I1203 09:39:13.150584 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" event={"ID":"a22816e8-f368-454d-93c3-762e0d5e88d7","Type":"ContainerStarted","Data":"7fdb8c6b1fe38ed4a389e344a8c4b3c3b41b2ec0fbec1a5bbbc7186080d19dbe"} Dec 03 09:39:13 crc kubenswrapper[4856]: I1203 09:39:13.198519 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" podStartSLOduration=1.719683684 podStartE2EDuration="2.198491771s" podCreationTimestamp="2025-12-03 09:39:11 +0000 UTC" firstStartedPulling="2025-12-03 09:39:12.184863666 +0000 UTC m=+1620.367756007" lastFinishedPulling="2025-12-03 09:39:12.663671783 +0000 UTC m=+1620.846564094" observedRunningTime="2025-12-03 09:39:13.187246721 +0000 UTC m=+1621.370139022" watchObservedRunningTime="2025-12-03 09:39:13.198491771 +0000 UTC m=+1621.381384072" Dec 03 09:39:14 crc kubenswrapper[4856]: I1203 09:39:14.167598 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerStarted","Data":"775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0"} Dec 03 09:39:15 crc kubenswrapper[4856]: I1203 09:39:15.179221 4856 generic.go:334] "Generic (PLEG): container finished" podID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerID="775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0" exitCode=0 Dec 03 09:39:15 crc kubenswrapper[4856]: I1203 09:39:15.179348 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerDied","Data":"775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0"} Dec 03 09:39:16 crc kubenswrapper[4856]: I1203 09:39:16.196915 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerStarted","Data":"bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8"} Dec 03 09:39:16 crc kubenswrapper[4856]: I1203 09:39:16.226686 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xs8c2" podStartSLOduration=2.719316098 podStartE2EDuration="5.226665307s" podCreationTimestamp="2025-12-03 09:39:11 +0000 UTC" firstStartedPulling="2025-12-03 09:39:13.14833445 +0000 UTC m=+1621.331226761" lastFinishedPulling="2025-12-03 09:39:15.655683669 +0000 UTC m=+1623.838575970" observedRunningTime="2025-12-03 09:39:16.217527032 +0000 UTC m=+1624.400419333" watchObservedRunningTime="2025-12-03 09:39:16.226665307 +0000 UTC m=+1624.409557608" Dec 03 09:39:21 crc kubenswrapper[4856]: I1203 09:39:21.823983 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:21 crc kubenswrapper[4856]: I1203 09:39:21.824719 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:21 crc kubenswrapper[4856]: I1203 09:39:21.876820 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:22 crc kubenswrapper[4856]: I1203 09:39:22.369619 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:22 crc kubenswrapper[4856]: I1203 09:39:22.430517 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs8c2"] Dec 03 09:39:24 crc kubenswrapper[4856]: I1203 09:39:24.430031 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xs8c2" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="registry-server" containerID="cri-o://bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8" gracePeriod=2 Dec 03 09:39:24 crc kubenswrapper[4856]: I1203 09:39:24.966530 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.069233 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-catalog-content\") pod \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.069766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbp4v\" (UniqueName: \"kubernetes.io/projected/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-kube-api-access-qbp4v\") pod \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.070007 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-utilities\") pod \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\" (UID: \"005d117a-cc9b-4f39-a9f7-c98b4915fc2e\") " Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.071009 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-utilities" (OuterVolumeSpecName: "utilities") pod "005d117a-cc9b-4f39-a9f7-c98b4915fc2e" (UID: "005d117a-cc9b-4f39-a9f7-c98b4915fc2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.077007 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-kube-api-access-qbp4v" (OuterVolumeSpecName: "kube-api-access-qbp4v") pod "005d117a-cc9b-4f39-a9f7-c98b4915fc2e" (UID: "005d117a-cc9b-4f39-a9f7-c98b4915fc2e"). InnerVolumeSpecName "kube-api-access-qbp4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.091210 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "005d117a-cc9b-4f39-a9f7-c98b4915fc2e" (UID: "005d117a-cc9b-4f39-a9f7-c98b4915fc2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.173662 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.173738 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbp4v\" (UniqueName: \"kubernetes.io/projected/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-kube-api-access-qbp4v\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.173758 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005d117a-cc9b-4f39-a9f7-c98b4915fc2e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.444193 4856 generic.go:334] "Generic (PLEG): container finished" podID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerID="bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8" exitCode=0 Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.444276 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xs8c2" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.444306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerDied","Data":"bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8"} Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.445544 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xs8c2" event={"ID":"005d117a-cc9b-4f39-a9f7-c98b4915fc2e","Type":"ContainerDied","Data":"88c27a6446165ba09c7a60f76ab910c58a59acf29c882fe892ad6c067e5a6d7b"} Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.445575 4856 scope.go:117] "RemoveContainer" containerID="bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.492114 4856 scope.go:117] "RemoveContainer" containerID="775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.495240 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs8c2"] Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.518621 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xs8c2"] Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.525021 4856 scope.go:117] "RemoveContainer" containerID="e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.580318 4856 scope.go:117] "RemoveContainer" containerID="bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8" Dec 03 09:39:25 crc kubenswrapper[4856]: E1203 09:39:25.581082 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8\": container with ID starting with bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8 not found: ID does not exist" containerID="bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.581122 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8"} err="failed to get container status \"bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8\": rpc error: code = NotFound desc = could not find container \"bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8\": container with ID starting with bcb97eeca65fe12b8ba8335500863fc086e61c00cd36ed4485458c5ac452abe8 not found: ID does not exist" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.581151 4856 scope.go:117] "RemoveContainer" containerID="775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0" Dec 03 09:39:25 crc kubenswrapper[4856]: E1203 09:39:25.581496 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0\": container with ID starting with 775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0 not found: ID does not exist" containerID="775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.581588 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0"} err="failed to get container status \"775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0\": rpc error: code = NotFound desc = could not find container \"775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0\": container with ID starting with 775ad4a91682dbc0f5198994f27699a67b3dca1556fd7d9fe8b84181ba9384e0 not found: ID does not exist" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.581669 4856 scope.go:117] "RemoveContainer" containerID="e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2" Dec 03 09:39:25 crc kubenswrapper[4856]: E1203 09:39:25.582145 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2\": container with ID starting with e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2 not found: ID does not exist" containerID="e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2" Dec 03 09:39:25 crc kubenswrapper[4856]: I1203 09:39:25.582206 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2"} err="failed to get container status \"e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2\": rpc error: code = NotFound desc = could not find container \"e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2\": container with ID starting with e491a4b57550402c2ced40feb00ecf0fc818434937b63f1144836a92cb3fb0f2 not found: ID does not exist" Dec 03 09:39:26 crc kubenswrapper[4856]: I1203 09:39:26.708300 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" path="/var/lib/kubelet/pods/005d117a-cc9b-4f39-a9f7-c98b4915fc2e/volumes" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.193107 4856 scope.go:117] "RemoveContainer" containerID="33876cc33dbfef68684e1e30c058df554b020fc18cf34bce65ac749ba1c1ec34" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.223407 4856 scope.go:117] "RemoveContainer" 
containerID="9b91890994058ed2d1cea3a178dad0ce7f8f53d2ff22725ed24455e7d3427931" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.284333 4856 scope.go:117] "RemoveContainer" containerID="d06d4fbda9c460bc95d79565168ad295eaf3ac130a340bb3aa654065b5218f92" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.317705 4856 scope.go:117] "RemoveContainer" containerID="bd7a384a97c177909e0bac225748dc28952cac5c7fa4083759eecc9a1d3c0b83" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.379190 4856 scope.go:117] "RemoveContainer" containerID="dea1e3a303ba7056d0aa989b5368aeb40fa4a40d4ec5d13ef0226b5821b99c60" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.420392 4856 scope.go:117] "RemoveContainer" containerID="6aa7c61cd434bd3cbf075b0a3ad31700da174e60ed3f533130ddab4b75599385" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.477454 4856 scope.go:117] "RemoveContainer" containerID="f59818e3a552a061c82fc8ec0617a63d171c54703b72ad3e7958c61fa7076684" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.513722 4856 scope.go:117] "RemoveContainer" containerID="b96acb939b4cc0c318b8d5095eb6d6d124c01b211d726e06f946e5def63cf23f" Dec 03 09:39:36 crc kubenswrapper[4856]: I1203 09:39:36.537049 4856 scope.go:117] "RemoveContainer" containerID="af2df5ec55472255e2b704174db5a220a62d41003698b87d32f1874644d4add1" Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.063512 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e469-account-create-update-fxbxd"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.078454 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-chdvr"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.090397 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f497-account-create-update-qlnhj"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.101498 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-cae5-account-create-update-mjb74"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.113321 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cpm64"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.123511 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cdnzk"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.132206 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f497-account-create-update-qlnhj"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.140718 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e469-account-create-update-fxbxd"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.153102 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-chdvr"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.163614 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cpm64"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.173952 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cdnzk"] Dec 03 09:39:39 crc kubenswrapper[4856]: I1203 09:39:39.186960 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-cae5-account-create-update-mjb74"] Dec 03 09:39:40 crc kubenswrapper[4856]: I1203 09:39:40.701795 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b57444-8f69-47dd-baf7-ecdae356f3bd" 
path="/var/lib/kubelet/pods/34b57444-8f69-47dd-baf7-ecdae356f3bd/volumes" Dec 03 09:39:40 crc kubenswrapper[4856]: I1203 09:39:40.702738 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e15650-511c-4fa4-b083-95f4c90e5833" path="/var/lib/kubelet/pods/56e15650-511c-4fa4-b083-95f4c90e5833/volumes" Dec 03 09:39:40 crc kubenswrapper[4856]: I1203 09:39:40.703324 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648722b8-a2df-4dac-94df-6dd3aa1db7be" path="/var/lib/kubelet/pods/648722b8-a2df-4dac-94df-6dd3aa1db7be/volumes" Dec 03 09:39:40 crc kubenswrapper[4856]: I1203 09:39:40.703957 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85130774-d029-4aa8-b4ab-a06843b68f0a" path="/var/lib/kubelet/pods/85130774-d029-4aa8-b4ab-a06843b68f0a/volumes" Dec 03 09:39:40 crc kubenswrapper[4856]: I1203 09:39:40.705261 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9150aa96-ff02-409b-ac82-c6e3e88d54cc" path="/var/lib/kubelet/pods/9150aa96-ff02-409b-ac82-c6e3e88d54cc/volumes" Dec 03 09:39:40 crc kubenswrapper[4856]: I1203 09:39:40.705949 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17424d3-bc7c-4a17-890b-5ddb43c4004b" path="/var/lib/kubelet/pods/d17424d3-bc7c-4a17-890b-5ddb43c4004b/volumes" Dec 03 09:39:44 crc kubenswrapper[4856]: I1203 09:39:44.036054 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-d4pvx"] Dec 03 09:39:44 crc kubenswrapper[4856]: I1203 09:39:44.053136 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-d4pvx"] Dec 03 09:39:44 crc kubenswrapper[4856]: I1203 09:39:44.719416 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7667fa-862b-4ac3-a7b1-92e336d1c63d" path="/var/lib/kubelet/pods/9a7667fa-862b-4ac3-a7b1-92e336d1c63d/volumes" Dec 03 09:39:52 crc kubenswrapper[4856]: I1203 09:39:52.759595 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:39:52 crc kubenswrapper[4856]: I1203 09:39:52.761101 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.130595 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-psr8r"] Dec 03 09:40:19 crc kubenswrapper[4856]: E1203 09:40:19.132252 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="registry-server" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.132276 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="registry-server" Dec 03 09:40:19 crc kubenswrapper[4856]: E1203 09:40:19.132318 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="extract-utilities" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.132327 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="extract-utilities" Dec 03 09:40:19 crc kubenswrapper[4856]: E1203 09:40:19.132342 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="extract-content" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.132349 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="extract-content" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.132624 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="005d117a-cc9b-4f39-a9f7-c98b4915fc2e" containerName="registry-server" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.134862 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.148115 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psr8r"] Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.237989 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-utilities\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.238445 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848nr\" (UniqueName: \"kubernetes.io/projected/71424f2a-945d-4e2a-b9df-e949c612ed92-kube-api-access-848nr\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.238581 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-catalog-content\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.340422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-utilities\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.340512 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848nr\" (UniqueName: \"kubernetes.io/projected/71424f2a-945d-4e2a-b9df-e949c612ed92-kube-api-access-848nr\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.340568 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-catalog-content\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.341130 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-utilities\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.345288 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-catalog-content\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.364113 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848nr\" (UniqueName: \"kubernetes.io/projected/71424f2a-945d-4e2a-b9df-e949c612ed92-kube-api-access-848nr\") pod \"certified-operators-psr8r\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:19 crc kubenswrapper[4856]: I1203 09:40:19.462403 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:20 crc kubenswrapper[4856]: I1203 09:40:20.092179 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psr8r"] Dec 03 09:40:21 crc kubenswrapper[4856]: I1203 09:40:21.101693 4856 generic.go:334] "Generic (PLEG): container finished" podID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerID="361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc" exitCode=0 Dec 03 09:40:21 crc kubenswrapper[4856]: I1203 09:40:21.102114 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerDied","Data":"361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc"} Dec 03 09:40:21 crc kubenswrapper[4856]: I1203 09:40:21.102157 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerStarted","Data":"a89b74369bc963b2355b1aecbb29cfe088edb2cb23c3cb8527339bb71ed4146b"} Dec 03 09:40:22 crc kubenswrapper[4856]: I1203 09:40:22.117276 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerStarted","Data":"7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260"} Dec 03 09:40:22 crc kubenswrapper[4856]: I1203 09:40:22.759339 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:40:22 crc kubenswrapper[4856]: I1203 09:40:22.759946 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:40:23 crc kubenswrapper[4856]: I1203 09:40:23.054348 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-sync-zpbhz"] Dec 03 09:40:23 crc kubenswrapper[4856]: I1203 09:40:23.066067 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zpbhz"] Dec 03 09:40:23 crc kubenswrapper[4856]: I1203 09:40:23.135407 4856 generic.go:334] "Generic (PLEG): container finished" podID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerID="7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260" exitCode=0 Dec 03 09:40:23 crc kubenswrapper[4856]: I1203 09:40:23.135465 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerDied","Data":"7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260"} Dec 03 09:40:24 crc kubenswrapper[4856]: I1203 09:40:24.702549 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a337aa7-570e-40fe-86ca-faee49a09165" path="/var/lib/kubelet/pods/3a337aa7-570e-40fe-86ca-faee49a09165/volumes" Dec 03 09:40:25 crc kubenswrapper[4856]: I1203 09:40:25.158925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerStarted","Data":"ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e"} Dec 03 09:40:25 crc kubenswrapper[4856]: I1203 09:40:25.182757 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-psr8r" podStartSLOduration=3.483821538 podStartE2EDuration="6.182734398s" podCreationTimestamp="2025-12-03 09:40:19 +0000 UTC" firstStartedPulling="2025-12-03 09:40:21.107205842 +0000 UTC m=+1689.290098143" lastFinishedPulling="2025-12-03 09:40:23.806118702 +0000 UTC m=+1691.989011003" observedRunningTime="2025-12-03 09:40:25.177552235 +0000 UTC m=+1693.360444536" watchObservedRunningTime="2025-12-03 09:40:25.182734398 +0000 UTC m=+1693.365626699" Dec 03 09:40:29 crc kubenswrapper[4856]: I1203 09:40:29.462738 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:29 crc kubenswrapper[4856]: I1203 09:40:29.463172 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:29 crc kubenswrapper[4856]: I1203 09:40:29.513999 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:30 crc kubenswrapper[4856]: I1203 09:40:30.265669 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:30 crc kubenswrapper[4856]: I1203 09:40:30.320633 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psr8r"] Dec 03 09:40:32 crc kubenswrapper[4856]: I1203 09:40:32.236415 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-psr8r" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="registry-server" containerID="cri-o://ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e" gracePeriod=2 Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.245188 4856 util.go:48] "No ready sandbox for pod can be found. 
Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.250366 4856 generic.go:334] "Generic (PLEG): container finished" podID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerID="ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e" exitCode=0 Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.250429 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerDied","Data":"ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e"} Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.250467 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psr8r" event={"ID":"71424f2a-945d-4e2a-b9df-e949c612ed92","Type":"ContainerDied","Data":"a89b74369bc963b2355b1aecbb29cfe088edb2cb23c3cb8527339bb71ed4146b"} Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.250491 4856 scope.go:117] "RemoveContainer" containerID="ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.297447 4856 scope.go:117] "RemoveContainer" containerID="7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.339519 4856 scope.go:117] "RemoveContainer" containerID="361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.377372 4856 scope.go:117] "RemoveContainer" containerID="ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e" Dec 03 09:40:33 crc kubenswrapper[4856]: E1203 09:40:33.378615 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e\": container with ID starting with ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e not found: ID does not exist" containerID="ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.378683 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e"} err="failed to get container status \"ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e\": rpc error: code = NotFound desc = could not find container \"ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e\": container with ID starting with ab94cdd2c0052cd37558505194b20787362334c8a13720b2bac922957084586e not found: ID does not exist" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.378727 4856 scope.go:117] "RemoveContainer" containerID="7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260" Dec 03 09:40:33 crc kubenswrapper[4856]: E1203 09:40:33.379286 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260\": container with ID starting with 7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260 not found: ID does not exist" containerID="7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.379324 4856 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260"} err="failed to get container status \"7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260\": rpc error: code = NotFound desc = could not find container \"7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260\": container with ID starting with 7cf1b80caa26c0103db9fc0a981dd8494e35a9e37695013299927e9ee9296260 not found: ID does not exist" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.379339 4856 scope.go:117] "RemoveContainer" containerID="361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc" Dec 03 09:40:33 crc kubenswrapper[4856]: E1203 09:40:33.379759 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc\": container with ID starting with 361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc not found: ID does not exist" containerID="361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.379788 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc"} err="failed to get container status \"361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc\": rpc error: code = NotFound desc = could not find container \"361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc\": container with ID starting with 361f372f93431714d64e742036c0af133e8817a08418f73f9b93c87edccfd5cc not found: ID does not exist" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.390977 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-catalog-content\") pod \"71424f2a-945d-4e2a-b9df-e949c612ed92\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.391298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-848nr\" (UniqueName: \"kubernetes.io/projected/71424f2a-945d-4e2a-b9df-e949c612ed92-kube-api-access-848nr\") pod \"71424f2a-945d-4e2a-b9df-e949c612ed92\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.391447 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-utilities\") pod \"71424f2a-945d-4e2a-b9df-e949c612ed92\" (UID: \"71424f2a-945d-4e2a-b9df-e949c612ed92\") " Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.393298 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-utilities" (OuterVolumeSpecName: "utilities") pod "71424f2a-945d-4e2a-b9df-e949c612ed92" (UID: "71424f2a-945d-4e2a-b9df-e949c612ed92"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.400956 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71424f2a-945d-4e2a-b9df-e949c612ed92-kube-api-access-848nr" (OuterVolumeSpecName: "kube-api-access-848nr") pod "71424f2a-945d-4e2a-b9df-e949c612ed92" (UID: "71424f2a-945d-4e2a-b9df-e949c612ed92"). InnerVolumeSpecName "kube-api-access-848nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.456075 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71424f2a-945d-4e2a-b9df-e949c612ed92" (UID: "71424f2a-945d-4e2a-b9df-e949c612ed92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.494520 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.494560 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71424f2a-945d-4e2a-b9df-e949c612ed92-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:40:33 crc kubenswrapper[4856]: I1203 09:40:33.494573 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-848nr\" (UniqueName: \"kubernetes.io/projected/71424f2a-945d-4e2a-b9df-e949c612ed92-kube-api-access-848nr\") on node \"crc\" DevicePath \"\"" Dec 03 09:40:34 crc kubenswrapper[4856]: I1203 09:40:34.262401 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psr8r" Dec 03 09:40:34 crc kubenswrapper[4856]: I1203 09:40:34.304985 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psr8r"] Dec 03 09:40:34 crc kubenswrapper[4856]: I1203 09:40:34.315946 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-psr8r"] Dec 03 09:40:34 crc kubenswrapper[4856]: I1203 09:40:34.709070 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" path="/var/lib/kubelet/pods/71424f2a-945d-4e2a-b9df-e949c612ed92/volumes" Dec 03 09:40:36 crc kubenswrapper[4856]: I1203 09:40:36.822876 4856 scope.go:117] "RemoveContainer" containerID="e1bdb6d8f4f2c275ca11b7cadf14654ac4813526bb5d66bea88785b8691b9d1b" Dec 03 09:40:36 crc kubenswrapper[4856]: I1203 09:40:36.865052 4856 scope.go:117] "RemoveContainer" containerID="496089bb145ff7a11c5cd7656787b6daa1131556a940186646fefe860d7691ae" Dec 03 09:40:36 crc kubenswrapper[4856]: I1203 09:40:36.901351 4856 scope.go:117] "RemoveContainer" containerID="edd9961a3c83cd58c564e9958e939a141febd21774efaba92d67f6de4463d585" Dec 03 09:40:36 crc kubenswrapper[4856]: I1203 09:40:36.949512 4856 scope.go:117] "RemoveContainer" containerID="abb8635ad093dfabd3555fb1ad751412d77bd3c30b503ce538335dcc43a91942" Dec 03 09:40:37 crc kubenswrapper[4856]: I1203 09:40:37.008576 4856 scope.go:117] "RemoveContainer" containerID="a0e22dfd2895630267419d5fd9e42c8158232062f5f297c0fe93ef13cfd88a6b" Dec 03 09:40:37 crc kubenswrapper[4856]: I1203 09:40:37.056414 4856 scope.go:117] "RemoveContainer" containerID="fffc8818c4098d7e15af236c5466452bf7e309e3b4c79dbeed3433264201beea" Dec 03 09:40:37 crc kubenswrapper[4856]: I1203 09:40:37.111273 4856 scope.go:117] "RemoveContainer" containerID="17538324df8bc7553ce59c1956db2e2d74a61061088255c4a13b078565f90ccb" Dec 03 09:40:37 crc kubenswrapper[4856]: I1203 09:40:37.175166 4856 scope.go:117] "RemoveContainer" containerID="4529a1191e68ea3b2598e70b001acfdfe4d15b2e330d4a446ffb625c49e6b2b0" Dec 03 09:40:50 crc kubenswrapper[4856]: I1203 09:40:50.049831 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k5kxs"] Dec 03 09:40:50 crc kubenswrapper[4856]: I1203 09:40:50.071216 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2dnjq"] Dec 03 09:40:50 crc kubenswrapper[4856]: I1203 09:40:50.084422 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2dnjq"] Dec 03 09:40:50 crc kubenswrapper[4856]: I1203 09:40:50.093324 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k5kxs"] Dec 03 09:40:50 crc kubenswrapper[4856]: I1203 09:40:50.701677 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fa448b-f085-4377-a7d2-a4e078ae00c3" path="/var/lib/kubelet/pods/17fa448b-f085-4377-a7d2-a4e078ae00c3/volumes" Dec 03 09:40:50 crc kubenswrapper[4856]: I1203 09:40:50.702911 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318afd46-7100-422f-983d-0a9c87cc38c6" path="/var/lib/kubelet/pods/318afd46-7100-422f-983d-0a9c87cc38c6/volumes" Dec 03 09:40:52 crc kubenswrapper[4856]: I1203 09:40:52.759519 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Dec 03 09:40:52 crc kubenswrapper[4856]: I1203 09:40:52.760149 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:40:52 crc kubenswrapper[4856]: I1203 09:40:52.760227 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:40:52 crc kubenswrapper[4856]: I1203 09:40:52.761126 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:40:52 crc kubenswrapper[4856]: I1203 09:40:52.761194 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" gracePeriod=600 Dec 03 09:40:52 crc kubenswrapper[4856]: E1203 09:40:52.904033 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:40:53 crc kubenswrapper[4856]: I1203 09:40:53.476592 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" exitCode=0 Dec 03 09:40:53 crc kubenswrapper[4856]: I1203 09:40:53.476742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"} Dec 03 09:40:53 crc kubenswrapper[4856]: I1203 09:40:53.477102 4856 scope.go:117] "RemoveContainer" containerID="897525ac673aefdc5e9d8d8ee22fa5a3cb30ead572d739a9785daddd1f821527" Dec 03 09:40:53 crc kubenswrapper[4856]: I1203 09:40:53.478200 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:40:53 crc kubenswrapper[4856]: E1203 09:40:53.478520 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:41:04 crc kubenswrapper[4856]: I1203 09:41:04.691081 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
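
The sequence above is the kubelet's standard liveness-failure path: the repeated probe failures against 127.0.0.1:8798 (09:39:52, 09:40:22, 09:40:52) cross the failure threshold, the container is killed with the pod's termination grace period (gracePeriod=600), and because machine-config-daemon keeps failing, restarts are throttled by CrashLoopBackOff. The "back-off 5m0s" quoted in the errors is the cap of the kubelet's per-container restart back-off; a sketch of the schedule, assuming the upstream defaults of a 10s initial delay that doubles per failed restart:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Sketch of the restart throttling behind the CrashLoopBackOff messages
    // above, assuming the upstream kubelet defaults: 10s initial delay,
    // doubling per failed restart, capped at the 5m0s quoted in the errors.
    func main() {
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	for restart := 1; restart <= 8; restart++ {
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    		fmt.Printf("restart %d: wait %s\n", restart, delay)
    		delay *= 2
    	}
    }

The identical back-off errors repeating below through 09:42:18 are pod syncs landing inside that window; each is skipped with the same message until the timer expires.
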
containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:41:04 crc kubenswrapper[4856]: E1203 09:41:04.694341 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:41:06 crc kubenswrapper[4856]: I1203 09:41:06.616706 4856 generic.go:334] "Generic (PLEG): container finished" podID="a22816e8-f368-454d-93c3-762e0d5e88d7" containerID="03dacda2a64cff52eb135c413d46382919ebc108f5e30c2db67ee9c72d3364b2" exitCode=0 Dec 03 09:41:06 crc kubenswrapper[4856]: I1203 09:41:06.618036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" event={"ID":"a22816e8-f368-454d-93c3-762e0d5e88d7","Type":"ContainerDied","Data":"03dacda2a64cff52eb135c413d46382919ebc108f5e30c2db67ee9c72d3364b2"} Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.126980 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.226192 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-ssh-key\") pod \"a22816e8-f368-454d-93c3-762e0d5e88d7\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.226433 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2spgk\" (UniqueName: \"kubernetes.io/projected/a22816e8-f368-454d-93c3-762e0d5e88d7-kube-api-access-2spgk\") pod \"a22816e8-f368-454d-93c3-762e0d5e88d7\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.226581 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-inventory\") pod \"a22816e8-f368-454d-93c3-762e0d5e88d7\" (UID: \"a22816e8-f368-454d-93c3-762e0d5e88d7\") " Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.237305 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22816e8-f368-454d-93c3-762e0d5e88d7-kube-api-access-2spgk" (OuterVolumeSpecName: "kube-api-access-2spgk") pod "a22816e8-f368-454d-93c3-762e0d5e88d7" (UID: "a22816e8-f368-454d-93c3-762e0d5e88d7"). InnerVolumeSpecName "kube-api-access-2spgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.263846 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-inventory" (OuterVolumeSpecName: "inventory") pod "a22816e8-f368-454d-93c3-762e0d5e88d7" (UID: "a22816e8-f368-454d-93c3-762e0d5e88d7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.264369 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a22816e8-f368-454d-93c3-762e0d5e88d7" (UID: "a22816e8-f368-454d-93c3-762e0d5e88d7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.328534 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2spgk\" (UniqueName: \"kubernetes.io/projected/a22816e8-f368-454d-93c3-762e0d5e88d7-kube-api-access-2spgk\") on node \"crc\" DevicePath \"\"" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.329019 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.329033 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a22816e8-f368-454d-93c3-762e0d5e88d7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.645215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" event={"ID":"a22816e8-f368-454d-93c3-762e0d5e88d7","Type":"ContainerDied","Data":"7fdb8c6b1fe38ed4a389e344a8c4b3c3b41b2ec0fbec1a5bbbc7186080d19dbe"} Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.645298 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fdb8c6b1fe38ed4a389e344a8c4b3c3b41b2ec0fbec1a5bbbc7186080d19dbe" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.645429 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.745079 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg"] Dec 03 09:41:08 crc kubenswrapper[4856]: E1203 09:41:08.745705 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="extract-content" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.745725 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="extract-content" Dec 03 09:41:08 crc kubenswrapper[4856]: E1203 09:41:08.745751 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22816e8-f368-454d-93c3-762e0d5e88d7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.745761 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22816e8-f368-454d-93c3-762e0d5e88d7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 09:41:08 crc kubenswrapper[4856]: E1203 09:41:08.745771 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="registry-server" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.745777 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="registry-server" Dec 03 09:41:08 crc kubenswrapper[4856]: E1203 09:41:08.745788 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="extract-utilities" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.745794 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="extract-utilities" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.746022 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="71424f2a-945d-4e2a-b9df-e949c612ed92" containerName="registry-server" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.746057 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22816e8-f368-454d-93c3-762e0d5e88d7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.747016 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.749873 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.750355 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.750862 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.750931 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.764051 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg"] Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.840444 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7xm\" (UniqueName: \"kubernetes.io/projected/1031be18-e812-46f1-9377-792b9dd841c0-kube-api-access-xf7xm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.840598 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.841125 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.943874 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7xm\" (UniqueName: \"kubernetes.io/projected/1031be18-e812-46f1-9377-792b9dd841c0-kube-api-access-xf7xm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.943947 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.944094 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.950164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.950345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:08 crc kubenswrapper[4856]: I1203 09:41:08.961466 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7xm\" (UniqueName: \"kubernetes.io/projected/1031be18-e812-46f1-9377-792b9dd841c0-kube-api-access-xf7xm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8srhg\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:09 crc kubenswrapper[4856]: I1203 09:41:09.072376 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l9lg2"] Dec 03 09:41:09 crc kubenswrapper[4856]: I1203 09:41:09.078729 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" Dec 03 09:41:09 crc kubenswrapper[4856]: I1203 09:41:09.101564 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l9lg2"] Dec 03 09:41:09 crc kubenswrapper[4856]: I1203 09:41:09.748751 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg"] Dec 03 09:41:10 crc kubenswrapper[4856]: I1203 09:41:10.039713 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vxpjp"] Dec 03 09:41:10 crc kubenswrapper[4856]: I1203 09:41:10.050890 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vxpjp"] Dec 03 09:41:10 crc kubenswrapper[4856]: I1203 09:41:10.671341 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" event={"ID":"1031be18-e812-46f1-9377-792b9dd841c0","Type":"ContainerStarted","Data":"7adcc4a97fb95dbd35c45f0f2ebf3a8309f08a608082c02e8a6f2485415349b3"} Dec 03 09:41:10 crc kubenswrapper[4856]: I1203 09:41:10.702628 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0" path="/var/lib/kubelet/pods/3ab20ccb-73a3-4d7d-ba56-f778d4cee6d0/volumes" Dec 03 09:41:10 crc kubenswrapper[4856]: I1203 09:41:10.704150 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f84948-d98d-443d-9055-b4f4d28369b4" path="/var/lib/kubelet/pods/a3f84948-d98d-443d-9055-b4f4d28369b4/volumes" Dec 03 09:41:11 crc kubenswrapper[4856]: I1203 09:41:11.043923 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jpsww"] Dec 03 09:41:11 crc kubenswrapper[4856]: I1203 09:41:11.052974 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jpsww"] Dec 03 09:41:11 crc kubenswrapper[4856]: I1203 09:41:11.683120 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" event={"ID":"1031be18-e812-46f1-9377-792b9dd841c0","Type":"ContainerStarted","Data":"a67afbf707f9ece930d2695de56fd41ce4eee0784b6db151e2a0ae3929c8ac6f"} Dec 03 09:41:11 crc kubenswrapper[4856]: I1203 09:41:11.712226 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" podStartSLOduration=3.048813461 podStartE2EDuration="3.712201881s" podCreationTimestamp="2025-12-03 09:41:08 +0000 UTC" firstStartedPulling="2025-12-03 09:41:09.747430822 +0000 UTC m=+1737.930323123" lastFinishedPulling="2025-12-03 09:41:10.410819242 +0000 UTC m=+1738.593711543" observedRunningTime="2025-12-03 09:41:11.705046668 +0000 UTC m=+1739.887938969" watchObservedRunningTime="2025-12-03 09:41:11.712201881 +0000 UTC m=+1739.895094182" Dec 03 09:41:12 crc kubenswrapper[4856]: I1203 09:41:12.703554 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e3fff2-b7c3-4cd4-b62e-83af7da6e87b" path="/var/lib/kubelet/pods/74e3fff2-b7c3-4cd4-b62e-83af7da6e87b/volumes" Dec 03 09:41:15 crc kubenswrapper[4856]: I1203 09:41:15.689974 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:41:15 crc kubenswrapper[4856]: E1203 09:41:15.692617 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:41:27 crc kubenswrapper[4856]: I1203 09:41:27.689360 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:41:27 crc kubenswrapper[4856]: E1203 09:41:27.690408 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:41:37 crc kubenswrapper[4856]: I1203 09:41:37.386005 4856 scope.go:117] "RemoveContainer" containerID="6c417a1d3c1e90d9d598e43f026833def2a0ca935265a2f958419f54a4dd891a" Dec 03 09:41:37 crc kubenswrapper[4856]: I1203 09:41:37.462715 4856 scope.go:117] "RemoveContainer" containerID="87d0121ed930968e361a811d3a8f9e0024155ec9e7462791a5ceb1a36a1128db" Dec 03 09:41:37 crc kubenswrapper[4856]: I1203 09:41:37.529865 4856 scope.go:117] "RemoveContainer" containerID="2a064e72a881dcc83802686f8d296d22eac5f29ee3afa83933593ff5378ac8ad" Dec 03 09:41:37 crc kubenswrapper[4856]: I1203 09:41:37.573728 4856 scope.go:117] "RemoveContainer" containerID="700b4bc7724023764e2ed91f12e99f5a84ee1952b947fa06b5e647b117fbf740" Dec 03 09:41:37 crc kubenswrapper[4856]: I1203 09:41:37.614695 4856 scope.go:117] "RemoveContainer" containerID="c6eda802efe3ea79c12bda1a57d85ecebecb2503601b65c5cfc8e48473010979" Dec 03 09:41:39 crc kubenswrapper[4856]: I1203 09:41:39.689888 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:41:39 crc kubenswrapper[4856]: E1203 09:41:39.690695 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:41:50 crc kubenswrapper[4856]: I1203 09:41:50.690034 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:41:50 crc kubenswrapper[4856]: E1203 09:41:50.691213 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.060034 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f837-account-create-update-7xhll"] Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.069873 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-c5ktp"] 
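
In the kubelet's SyncLoop vocabulary used throughout these records, "SyncLoop DELETE" means the API object was marked for deletion (its deletionTimestamp is set) while "SyncLoop REMOVE" means the object is gone from the apiserver; between the two, and briefly after, the node still holds state that the "Cleaned up orphaned pod volumes dir" records release. A minimal client-go sketch listing pods in the openstack namespace that sit in that terminating window; the in-cluster configuration is an assumption, substitute any kubeconfig loader as needed:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // Lists pods marked for deletion but not yet removed from the apiserver,
    // i.e. the state between the DELETE and REMOVE records above and below.
    func main() {
    	cfg, err := rest.InClusterConfig() // assumed: running inside the cluster
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("openstack").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		if p.DeletionTimestamp != nil {
    			fmt.Printf("%s terminating since %s\n", p.Name, p.DeletionTimestamp)
    		}
    	}
    }

The short-lived db-create and account-create-update job pods deleted in batches here never reappear in that list for long; the matching REMOVE records land within milliseconds, and the orphaned volume directories are swept on the next housekeeping pass at 09:41:56.
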
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.079972 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wqjjd"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.092329 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-36d7-account-create-update-5hhw7"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.102164 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-72e7-account-create-update-w8vbn"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.111973 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-482j6"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.121125 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c5ktp"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.128368 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-482j6"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.137222 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wqjjd"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.146948 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-72e7-account-create-update-w8vbn"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.156096 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f837-account-create-update-7xhll"]
Dec 03 09:41:55 crc kubenswrapper[4856]: I1203 09:41:55.165078 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-36d7-account-create-update-5hhw7"]
Dec 03 09:41:56 crc kubenswrapper[4856]: I1203 09:41:56.703289 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037e7d3b-3523-4406-b7d6-39dc9c9256c3" path="/var/lib/kubelet/pods/037e7d3b-3523-4406-b7d6-39dc9c9256c3/volumes"
Dec 03 09:41:56 crc kubenswrapper[4856]: I1203 09:41:56.704942 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2927120f-ce3e-4ca6-8522-80b99afcdcc8" path="/var/lib/kubelet/pods/2927120f-ce3e-4ca6-8522-80b99afcdcc8/volumes"
Dec 03 09:41:56 crc kubenswrapper[4856]: I1203 09:41:56.705830 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea33628-dc6c-486d-8214-0c17593c5c65" path="/var/lib/kubelet/pods/3ea33628-dc6c-486d-8214-0c17593c5c65/volumes"
Dec 03 09:41:56 crc kubenswrapper[4856]: I1203 09:41:56.706623 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1" path="/var/lib/kubelet/pods/c254bdd1-0dcf-46df-9f1d-d3d0e54a8fb1/volumes"
Dec 03 09:41:56 crc kubenswrapper[4856]: I1203 09:41:56.708547 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e53d88-906a-49d5-8687-bac531c74375" path="/var/lib/kubelet/pods/d1e53d88-906a-49d5-8687-bac531c74375/volumes"
Dec 03 09:41:56 crc kubenswrapper[4856]: I1203 09:41:56.709273 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc2b70e-a05b-4d56-87f6-2656e84d9a77" path="/var/lib/kubelet/pods/efc2b70e-a05b-4d56-87f6-2656e84d9a77/volumes"
Dec 03 09:42:05 crc kubenswrapper[4856]: I1203 09:42:05.689559 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:42:05 crc kubenswrapper[4856]: E1203 09:42:05.690719 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:42:18 crc kubenswrapper[4856]: I1203 09:42:18.689701 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:42:18 crc kubenswrapper[4856]: E1203 09:42:18.690844 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:42:28 crc kubenswrapper[4856]: I1203 09:42:28.051313 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2jx7k"]
Dec 03 09:42:28 crc kubenswrapper[4856]: I1203 09:42:28.061782 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2jx7k"]
Dec 03 09:42:28 crc kubenswrapper[4856]: I1203 09:42:28.487883 4856 generic.go:334] "Generic (PLEG): container finished" podID="1031be18-e812-46f1-9377-792b9dd841c0" containerID="a67afbf707f9ece930d2695de56fd41ce4eee0784b6db151e2a0ae3929c8ac6f" exitCode=0
Dec 03 09:42:28 crc kubenswrapper[4856]: I1203 09:42:28.487961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" event={"ID":"1031be18-e812-46f1-9377-792b9dd841c0","Type":"ContainerDied","Data":"a67afbf707f9ece930d2695de56fd41ce4eee0784b6db151e2a0ae3929c8ac6f"}
Dec 03 09:42:28 crc kubenswrapper[4856]: I1203 09:42:28.703131 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bb4e2f-c274-4f88-a8b4-de6e9b737113" path="/var/lib/kubelet/pods/20bb4e2f-c274-4f88-a8b4-de6e9b737113/volumes"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.024127 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.079731 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-inventory\") pod \"1031be18-e812-46f1-9377-792b9dd841c0\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") "
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.079793 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7xm\" (UniqueName: \"kubernetes.io/projected/1031be18-e812-46f1-9377-792b9dd841c0-kube-api-access-xf7xm\") pod \"1031be18-e812-46f1-9377-792b9dd841c0\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") "
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.079972 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-ssh-key\") pod \"1031be18-e812-46f1-9377-792b9dd841c0\" (UID: \"1031be18-e812-46f1-9377-792b9dd841c0\") "
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.093164 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1031be18-e812-46f1-9377-792b9dd841c0-kube-api-access-xf7xm" (OuterVolumeSpecName: "kube-api-access-xf7xm") pod "1031be18-e812-46f1-9377-792b9dd841c0" (UID: "1031be18-e812-46f1-9377-792b9dd841c0"). InnerVolumeSpecName "kube-api-access-xf7xm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.110113 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-inventory" (OuterVolumeSpecName: "inventory") pod "1031be18-e812-46f1-9377-792b9dd841c0" (UID: "1031be18-e812-46f1-9377-792b9dd841c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.116007 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1031be18-e812-46f1-9377-792b9dd841c0" (UID: "1031be18-e812-46f1-9377-792b9dd841c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.182829 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7xm\" (UniqueName: \"kubernetes.io/projected/1031be18-e812-46f1-9377-792b9dd841c0-kube-api-access-xf7xm\") on node \"crc\" DevicePath \"\""
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.182884 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.182895 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1031be18-e812-46f1-9377-792b9dd841c0-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.518688 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg" event={"ID":"1031be18-e812-46f1-9377-792b9dd841c0","Type":"ContainerDied","Data":"7adcc4a97fb95dbd35c45f0f2ebf3a8309f08a608082c02e8a6f2485415349b3"}
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.519214 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adcc4a97fb95dbd35c45f0f2ebf3a8309f08a608082c02e8a6f2485415349b3"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.518825 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8srhg"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.610291 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"]
Dec 03 09:42:30 crc kubenswrapper[4856]: E1203 09:42:30.610891 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1031be18-e812-46f1-9377-792b9dd841c0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.610939 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1031be18-e812-46f1-9377-792b9dd841c0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.611180 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1031be18-e812-46f1-9377-792b9dd841c0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.612080 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.615016 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.615501 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.615715 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.615901 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.631233 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"]
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.696475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.696552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.696602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjf5j\" (UniqueName: \"kubernetes.io/projected/6cdb1761-f836-42e1-a1d2-e52ccf41594b-kube-api-access-wjf5j\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.799138 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.799252 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjf5j\" (UniqueName: \"kubernetes.io/projected/6cdb1761-f836-42e1-a1d2-e52ccf41594b-kube-api-access-wjf5j\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.801002 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.805343 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.805407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.817745 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjf5j\" (UniqueName: \"kubernetes.io/projected/6cdb1761-f836-42e1-a1d2-e52ccf41594b-kube-api-access-wjf5j\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:30 crc kubenswrapper[4856]: I1203 09:42:30.935936 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:31 crc kubenswrapper[4856]: I1203 09:42:31.500136 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"]
Dec 03 09:42:31 crc kubenswrapper[4856]: I1203 09:42:31.532759 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp" event={"ID":"6cdb1761-f836-42e1-a1d2-e52ccf41594b","Type":"ContainerStarted","Data":"875a190b2a6dc1ee4816994dac7571398460b661fa6197c58769af6e0a3443ea"}
Dec 03 09:42:32 crc kubenswrapper[4856]: I1203 09:42:32.543262 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp" event={"ID":"6cdb1761-f836-42e1-a1d2-e52ccf41594b","Type":"ContainerStarted","Data":"17a23d5478d22936f71049128ad4244abebd45117683a196e75d4d7e0af877d1"}
Dec 03 09:42:32 crc kubenswrapper[4856]: I1203 09:42:32.571460 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp" podStartSLOduration=1.8720055740000001 podStartE2EDuration="2.571426154s" podCreationTimestamp="2025-12-03 09:42:30 +0000 UTC" firstStartedPulling="2025-12-03 09:42:31.506117698 +0000 UTC m=+1819.689009999" lastFinishedPulling="2025-12-03 09:42:32.205538278 +0000 UTC m=+1820.388430579" observedRunningTime="2025-12-03 09:42:32.561521513 +0000 UTC m=+1820.744413844" watchObservedRunningTime="2025-12-03 09:42:32.571426154 +0000 UTC m=+1820.754318455"
Dec 03 09:42:33 crc kubenswrapper[4856]: I1203 09:42:33.689084 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:42:33 crc kubenswrapper[4856]: E1203 09:42:33.689898 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:42:37 crc kubenswrapper[4856]: I1203 09:42:37.599061 4856 generic.go:334] "Generic (PLEG): container finished" podID="6cdb1761-f836-42e1-a1d2-e52ccf41594b" containerID="17a23d5478d22936f71049128ad4244abebd45117683a196e75d4d7e0af877d1" exitCode=0
Dec 03 09:42:37 crc kubenswrapper[4856]: I1203 09:42:37.599150 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp" event={"ID":"6cdb1761-f836-42e1-a1d2-e52ccf41594b","Type":"ContainerDied","Data":"17a23d5478d22936f71049128ad4244abebd45117683a196e75d4d7e0af877d1"}
Dec 03 09:42:37 crc kubenswrapper[4856]: I1203 09:42:37.786905 4856 scope.go:117] "RemoveContainer" containerID="ec0cfd0dc6ab12c5648c443c38f05664f0f05ea234955cb92bc17bc9ac43a4c5"
Dec 03 09:42:37 crc kubenswrapper[4856]: I1203 09:42:37.832568 4856 scope.go:117] "RemoveContainer" containerID="2074d185adc5cd92b3be854bd05a72d8ee2cb15cddd3adead1ab6c9852b7e56d"
Dec 03 09:42:37 crc kubenswrapper[4856]: I1203 09:42:37.925512 4856 scope.go:117] "RemoveContainer" containerID="1671f5d17bf50cebd191032c03b62666e5e3f75392ac471607581f4adc3ac712"
Dec 03 09:42:37 crc kubenswrapper[4856]: I1203 09:42:37.958212 4856 scope.go:117] "RemoveContainer" containerID="81827d51a71e64f28a01cd822e516ff1c4de082bf2b5cf3bfafa187a7d4e928f"
Dec 03 09:42:38 crc kubenswrapper[4856]: I1203 09:42:38.018880 4856 scope.go:117] "RemoveContainer" containerID="10273cd0213c46785a8f01644cec36674da8a9554f7105ae8a4706492aabc37d"
Dec 03 09:42:38 crc kubenswrapper[4856]: I1203 09:42:38.067165 4856 scope.go:117] "RemoveContainer" containerID="355d73a2dbf662a5f84176f02db1167cb13552968636f5b11f4318dc36b96639"
Dec 03 09:42:38 crc kubenswrapper[4856]: I1203 09:42:38.119196 4856 scope.go:117] "RemoveContainer" containerID="0f4f35dc969e0914f86618d2e261c8e8be38de15f262d5f17ec2750caba29816"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.055773 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.141931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjf5j\" (UniqueName: \"kubernetes.io/projected/6cdb1761-f836-42e1-a1d2-e52ccf41594b-kube-api-access-wjf5j\") pod \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") "
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.142139 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-ssh-key\") pod \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") "
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.142280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-inventory\") pod \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\" (UID: \"6cdb1761-f836-42e1-a1d2-e52ccf41594b\") "
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.150376 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdb1761-f836-42e1-a1d2-e52ccf41594b-kube-api-access-wjf5j" (OuterVolumeSpecName: "kube-api-access-wjf5j") pod "6cdb1761-f836-42e1-a1d2-e52ccf41594b" (UID: "6cdb1761-f836-42e1-a1d2-e52ccf41594b"). InnerVolumeSpecName "kube-api-access-wjf5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.177367 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-inventory" (OuterVolumeSpecName: "inventory") pod "6cdb1761-f836-42e1-a1d2-e52ccf41594b" (UID: "6cdb1761-f836-42e1-a1d2-e52ccf41594b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.182631 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cdb1761-f836-42e1-a1d2-e52ccf41594b" (UID: "6cdb1761-f836-42e1-a1d2-e52ccf41594b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.246039 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.246120 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cdb1761-f836-42e1-a1d2-e52ccf41594b-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.246137 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjf5j\" (UniqueName: \"kubernetes.io/projected/6cdb1761-f836-42e1-a1d2-e52ccf41594b-kube-api-access-wjf5j\") on node \"crc\" DevicePath \"\""
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.621686 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp" event={"ID":"6cdb1761-f836-42e1-a1d2-e52ccf41594b","Type":"ContainerDied","Data":"875a190b2a6dc1ee4816994dac7571398460b661fa6197c58769af6e0a3443ea"}
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.621760 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="875a190b2a6dc1ee4816994dac7571398460b661fa6197c58769af6e0a3443ea"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.621897 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.715205 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"]
Dec 03 09:42:39 crc kubenswrapper[4856]: E1203 09:42:39.716349 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdb1761-f836-42e1-a1d2-e52ccf41594b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.716377 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdb1761-f836-42e1-a1d2-e52ccf41594b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.716742 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdb1761-f836-42e1-a1d2-e52ccf41594b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.720131 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.724627 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.725144 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.725286 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.726582 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.736791 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"]
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.759236 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.759378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qn69\" (UniqueName: \"kubernetes.io/projected/eed4d3c5-8f3d-4fbb-8eb9-197302785490-kube-api-access-9qn69\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.759535 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.863043 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.863273 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.864795 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qn69\" (UniqueName: \"kubernetes.io/projected/eed4d3c5-8f3d-4fbb-8eb9-197302785490-kube-api-access-9qn69\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.870555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.873716 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:39 crc kubenswrapper[4856]: I1203 09:42:39.892575 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qn69\" (UniqueName: \"kubernetes.io/projected/eed4d3c5-8f3d-4fbb-8eb9-197302785490-kube-api-access-9qn69\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2v6dt\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:40 crc kubenswrapper[4856]: I1203 09:42:40.050769 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:42:40 crc kubenswrapper[4856]: I1203 09:42:40.588490 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"]
Dec 03 09:42:40 crc kubenswrapper[4856]: I1203 09:42:40.634978 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt" event={"ID":"eed4d3c5-8f3d-4fbb-8eb9-197302785490","Type":"ContainerStarted","Data":"a568065bbf5eea3dd101a698d7e3056e548264d3a9b321aedd5590dc1d92710b"}
Dec 03 09:42:41 crc kubenswrapper[4856]: I1203 09:42:41.648107 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt" event={"ID":"eed4d3c5-8f3d-4fbb-8eb9-197302785490","Type":"ContainerStarted","Data":"d540e9b74f31ae95636049c1f972870882e70c2d1c5e2e8340e52d0154e6e90f"}
Dec 03 09:42:48 crc kubenswrapper[4856]: I1203 09:42:48.689969 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:42:48 crc kubenswrapper[4856]: E1203 09:42:48.691205 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.058223 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt" podStartSLOduration=12.60913582 podStartE2EDuration="13.058181719s" podCreationTimestamp="2025-12-03 09:42:39 +0000 UTC" firstStartedPulling="2025-12-03 09:42:40.597623368 +0000 UTC m=+1828.780515669" lastFinishedPulling="2025-12-03 09:42:41.046669267 +0000 UTC m=+1829.229561568" observedRunningTime="2025-12-03 09:42:41.671517952 +0000 UTC m=+1829.854410253" watchObservedRunningTime="2025-12-03 09:42:52.058181719 +0000 UTC m=+1840.241074030"
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.088280 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5sgp5"]
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.105643 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smxjg"]
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.116593 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5sgp5"]
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.126915 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smxjg"]
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.703715 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb4a4f4-8246-453d-a39d-db70774f8e5b" path="/var/lib/kubelet/pods/6bb4a4f4-8246-453d-a39d-db70774f8e5b/volumes"
Dec 03 09:42:52 crc kubenswrapper[4856]: I1203 09:42:52.705193 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf2dec0-9031-43e3-8f9f-90b36ce4b786" path="/var/lib/kubelet/pods/7cf2dec0-9031-43e3-8f9f-90b36ce4b786/volumes"
Dec 03 09:42:59 crc kubenswrapper[4856]: I1203 09:42:59.689476 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:42:59 crc kubenswrapper[4856]: E1203 09:42:59.690720 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:43:14 crc kubenswrapper[4856]: I1203 09:43:14.690226 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:43:14 crc kubenswrapper[4856]: E1203 09:43:14.691418 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:43:22 crc kubenswrapper[4856]: I1203 09:43:22.078208 4856 generic.go:334] "Generic (PLEG): container finished" podID="eed4d3c5-8f3d-4fbb-8eb9-197302785490" containerID="d540e9b74f31ae95636049c1f972870882e70c2d1c5e2e8340e52d0154e6e90f" exitCode=0
Dec 03 09:43:22 crc kubenswrapper[4856]: I1203 09:43:22.078377 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt" event={"ID":"eed4d3c5-8f3d-4fbb-8eb9-197302785490","Type":"ContainerDied","Data":"d540e9b74f31ae95636049c1f972870882e70c2d1c5e2e8340e52d0154e6e90f"}
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.556072 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.740204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-ssh-key\") pod \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") "
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.740268 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-inventory\") pod \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") "
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.740328 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qn69\" (UniqueName: \"kubernetes.io/projected/eed4d3c5-8f3d-4fbb-8eb9-197302785490-kube-api-access-9qn69\") pod \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\" (UID: \"eed4d3c5-8f3d-4fbb-8eb9-197302785490\") "
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.753317 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed4d3c5-8f3d-4fbb-8eb9-197302785490-kube-api-access-9qn69" (OuterVolumeSpecName: "kube-api-access-9qn69") pod "eed4d3c5-8f3d-4fbb-8eb9-197302785490" (UID: "eed4d3c5-8f3d-4fbb-8eb9-197302785490"). InnerVolumeSpecName "kube-api-access-9qn69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.773410 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-inventory" (OuterVolumeSpecName: "inventory") pod "eed4d3c5-8f3d-4fbb-8eb9-197302785490" (UID: "eed4d3c5-8f3d-4fbb-8eb9-197302785490"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.773976 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eed4d3c5-8f3d-4fbb-8eb9-197302785490" (UID: "eed4d3c5-8f3d-4fbb-8eb9-197302785490"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.843432 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.843473 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eed4d3c5-8f3d-4fbb-8eb9-197302785490-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 09:43:23 crc kubenswrapper[4856]: I1203 09:43:23.844310 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qn69\" (UniqueName: \"kubernetes.io/projected/eed4d3c5-8f3d-4fbb-8eb9-197302785490-kube-api-access-9qn69\") on node \"crc\" DevicePath \"\""
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.102093 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt" event={"ID":"eed4d3c5-8f3d-4fbb-8eb9-197302785490","Type":"ContainerDied","Data":"a568065bbf5eea3dd101a698d7e3056e548264d3a9b321aedd5590dc1d92710b"}
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.102155 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a568065bbf5eea3dd101a698d7e3056e548264d3a9b321aedd5590dc1d92710b"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.102158 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2v6dt"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.214821 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"]
Dec 03 09:43:24 crc kubenswrapper[4856]: E1203 09:43:24.215461 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed4d3c5-8f3d-4fbb-8eb9-197302785490" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.215491 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed4d3c5-8f3d-4fbb-8eb9-197302785490" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.215746 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed4d3c5-8f3d-4fbb-8eb9-197302785490" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.216872 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.223111 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.223199 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.223479 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.223683 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.238164 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"]
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.359108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glc49\" (UniqueName: \"kubernetes.io/projected/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-kube-api-access-glc49\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.359261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.359327 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.461162 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.461269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.461347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glc49\" (UniqueName: \"kubernetes.io/projected/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-kube-api-access-glc49\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.467687 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.471337 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.481360 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glc49\" (UniqueName: \"kubernetes.io/projected/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-kube-api-access-glc49\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:24 crc kubenswrapper[4856]: I1203 09:43:24.548362 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"
Dec 03 09:43:25 crc kubenswrapper[4856]: I1203 09:43:25.102020 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh"]
Dec 03 09:43:26 crc kubenswrapper[4856]: I1203 09:43:26.131899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" event={"ID":"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c","Type":"ContainerStarted","Data":"00cdd84f606cbccef58f6a014334fc9086d695d6a2bedd75aa1fd6de6aac4f6b"}
Dec 03 09:43:26 crc kubenswrapper[4856]: I1203 09:43:26.132466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" event={"ID":"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c","Type":"ContainerStarted","Data":"f7d1419c2e5675584613f5543dac5b22f1d26a406d3ac76c663c33c111539728"}
Dec 03 09:43:26 crc kubenswrapper[4856]: I1203 09:43:26.160240 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" podStartSLOduration=1.676351747 podStartE2EDuration="2.160210621s" podCreationTimestamp="2025-12-03 09:43:24 +0000 UTC" firstStartedPulling="2025-12-03 09:43:25.110277545 +0000 UTC m=+1873.293169846" lastFinishedPulling="2025-12-03 09:43:25.594136419 +0000 UTC m=+1873.777028720" observedRunningTime="2025-12-03 09:43:26.150797542 +0000 UTC m=+1874.333689863" watchObservedRunningTime="2025-12-03 09:43:26.160210621 +0000 UTC m=+1874.343102922"
Dec 03 09:43:29 crc kubenswrapper[4856]: I1203 09:43:29.689246 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:43:29 crc kubenswrapper[4856]: E1203 09:43:29.690679 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:43:38 crc kubenswrapper[4856]: I1203 09:43:38.047588 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tz5lq"]
Dec 03 09:43:38 crc kubenswrapper[4856]: I1203 09:43:38.056898 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tz5lq"]
Dec 03 09:43:38 crc kubenswrapper[4856]: I1203 09:43:38.370144 4856 scope.go:117] "RemoveContainer" containerID="26df00eb15714b6f4720d01cf7aa8ff48fd215be3d91fadcfb73bd3db75ea863"
Dec 03 09:43:38 crc kubenswrapper[4856]: I1203 09:43:38.446488 4856 scope.go:117] "RemoveContainer" containerID="a522ca6f0c630c607e4528e539c3a411906e150e45b2a4f574b26f27e734d9fc"
Dec 03 09:43:38 crc kubenswrapper[4856]: I1203 09:43:38.703979 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe1f0d9-6c77-4dab-b894-40c419c10324" path="/var/lib/kubelet/pods/2fe1f0d9-6c77-4dab-b894-40c419c10324/volumes"
Dec 03 09:43:42 crc kubenswrapper[4856]: I1203 09:43:42.690045 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:43:42 crc kubenswrapper[4856]: E1203 09:43:42.691535 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.163982 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lwpm6"]
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.167212 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.172872 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lwpm6"]
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.253425 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-utilities\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.253572 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzrb\" (UniqueName: \"kubernetes.io/projected/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-kube-api-access-chzrb\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.253615 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-catalog-content\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.356256 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chzrb\" (UniqueName: \"kubernetes.io/projected/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-kube-api-access-chzrb\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.356731 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-catalog-content\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.357019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-utilities\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.357741 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-utilities\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.357760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-catalog-content\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.385518 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chzrb\" (UniqueName: \"kubernetes.io/projected/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-kube-api-access-chzrb\") pod \"redhat-operators-lwpm6\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") " pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:44 crc kubenswrapper[4856]: I1203 09:43:44.497740 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:45 crc kubenswrapper[4856]: I1203 09:43:45.159962 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lwpm6"]
Dec 03 09:43:45 crc kubenswrapper[4856]: W1203 09:43:45.161430 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7783c4fd_e1f3_4f3f_aea7_252bb07b9856.slice/crio-fad1aea934247b3ddb3f4aad9b9415727daa5162c53f63263b912083383da116 WatchSource:0}: Error finding container fad1aea934247b3ddb3f4aad9b9415727daa5162c53f63263b912083383da116: Status 404 returned error can't find the container with id fad1aea934247b3ddb3f4aad9b9415727daa5162c53f63263b912083383da116
Dec 03 09:43:45 crc kubenswrapper[4856]: I1203 09:43:45.384149 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerStarted","Data":"fad1aea934247b3ddb3f4aad9b9415727daa5162c53f63263b912083383da116"}
Dec 03 09:43:46 crc kubenswrapper[4856]: I1203 09:43:46.397335 4856 generic.go:334] "Generic (PLEG): container finished" podID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerID="9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7" exitCode=0
Dec 03 09:43:46 crc kubenswrapper[4856]: I1203 09:43:46.397414 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerDied","Data":"9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7"}
Dec 03 09:43:48 crc kubenswrapper[4856]: I1203 09:43:48.421443 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerStarted","Data":"16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7"}
Dec 03 09:43:49 crc kubenswrapper[4856]: I1203 09:43:49.434632 4856 generic.go:334] "Generic (PLEG): container finished" podID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerID="16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7" exitCode=0
Dec 03 09:43:49 crc kubenswrapper[4856]: I1203 09:43:49.434700 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerDied","Data":"16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7"}
Dec 03 09:43:49 crc kubenswrapper[4856]: I1203 09:43:49.438206 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 09:43:52 crc kubenswrapper[4856]: I1203 09:43:52.175116 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerStarted","Data":"2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55"}
Dec 03 09:43:54 crc kubenswrapper[4856]: I1203 09:43:54.498769 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:54 crc kubenswrapper[4856]: I1203 09:43:54.498881 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:43:55 crc kubenswrapper[4856]: I1203 09:43:55.581176 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lwpm6" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="registry-server" probeResult="failure" output=<
Dec 03 09:43:55 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s
Dec 03 09:43:55 crc kubenswrapper[4856]: >
Dec 03 09:43:57 crc kubenswrapper[4856]: I1203 09:43:57.690109 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:43:57 crc kubenswrapper[4856]: E1203 09:43:57.690509 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:44:04 crc kubenswrapper[4856]: I1203 09:44:04.552639 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:44:04 crc kubenswrapper[4856]: I1203 09:44:04.587394 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lwpm6" podStartSLOduration=17.004289264 podStartE2EDuration="20.587360821s" podCreationTimestamp="2025-12-03 09:43:44 +0000 UTC" firstStartedPulling="2025-12-03 09:43:46.400069901 +0000 UTC m=+1894.582962212" lastFinishedPulling="2025-12-03 09:43:49.983141468 +0000 UTC m=+1898.166033769" observedRunningTime="2025-12-03 09:43:52.198661418 +0000 UTC m=+1900.381553719" watchObservedRunningTime="2025-12-03 09:44:04.587360821 +0000 UTC m=+1912.770253132"
Dec 03 09:44:04 crc kubenswrapper[4856]: I1203 09:44:04.608736 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:44:04 crc kubenswrapper[4856]: I1203 09:44:04.793958 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lwpm6"]
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.328467 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lwpm6" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="registry-server" containerID="cri-o://2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55" gracePeriod=2
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.851735 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.915324 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chzrb\" (UniqueName: \"kubernetes.io/projected/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-kube-api-access-chzrb\") pod \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") "
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.915914 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-utilities\") pod \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") "
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.916136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-catalog-content\") pod \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\" (UID: \"7783c4fd-e1f3-4f3f-aea7-252bb07b9856\") "
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.916729 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-utilities" (OuterVolumeSpecName: "utilities") pod "7783c4fd-e1f3-4f3f-aea7-252bb07b9856" (UID: "7783c4fd-e1f3-4f3f-aea7-252bb07b9856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.917555 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 09:44:06 crc kubenswrapper[4856]: I1203 09:44:06.923065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-kube-api-access-chzrb" (OuterVolumeSpecName: "kube-api-access-chzrb") pod "7783c4fd-e1f3-4f3f-aea7-252bb07b9856" (UID: "7783c4fd-e1f3-4f3f-aea7-252bb07b9856"). InnerVolumeSpecName "kube-api-access-chzrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.019892 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chzrb\" (UniqueName: \"kubernetes.io/projected/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-kube-api-access-chzrb\") on node \"crc\" DevicePath \"\""
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.029437 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7783c4fd-e1f3-4f3f-aea7-252bb07b9856" (UID: "7783c4fd-e1f3-4f3f-aea7-252bb07b9856"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.122082 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7783c4fd-e1f3-4f3f-aea7-252bb07b9856-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.359947 4856 generic.go:334] "Generic (PLEG): container finished" podID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerID="2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55" exitCode=0
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.360090 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwpm6"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.360080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerDied","Data":"2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55"}
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.360904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwpm6" event={"ID":"7783c4fd-e1f3-4f3f-aea7-252bb07b9856","Type":"ContainerDied","Data":"fad1aea934247b3ddb3f4aad9b9415727daa5162c53f63263b912083383da116"}
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.360969 4856 scope.go:117] "RemoveContainer" containerID="2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.387436 4856 scope.go:117] "RemoveContainer" containerID="16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.415513 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lwpm6"]
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.422358 4856 scope.go:117] "RemoveContainer" containerID="9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.425346 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lwpm6"]
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.488780 4856 scope.go:117] "RemoveContainer" containerID="2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55"
Dec 03 09:44:07 crc kubenswrapper[4856]: E1203 09:44:07.489489 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55\": container with ID starting with 2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55 not found: ID does not exist" containerID="2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.489559 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55"} err="failed to get container status \"2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55\": rpc error: code = NotFound desc = could not find container \"2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55\": container with ID starting with 2af9fdc20286b698d968b5c5d4427e965feea299745bb7afc52cd3d939c61b55 not found: ID does not exist"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.489612 4856 scope.go:117] "RemoveContainer" containerID="16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7"
Dec 03 09:44:07 crc kubenswrapper[4856]: E1203 09:44:07.490244 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7\": container with ID starting with 16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7 not found: ID does not exist" containerID="16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.490325 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7"} err="failed to get container status \"16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7\": rpc error: code = NotFound desc = could not find container \"16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7\": container with ID starting with 16c655a6357ffd3674f3572eaaaa6d2e710f20791e79878c0753d2e37a671ec7 not found: ID does not exist"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.490386 4856 scope.go:117] "RemoveContainer" containerID="9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7"
Dec 03 09:44:07 crc kubenswrapper[4856]: E1203 09:44:07.490902 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7\": container with ID starting with 9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7 not found: ID does not exist" containerID="9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7"
Dec 03 09:44:07 crc kubenswrapper[4856]: I1203 09:44:07.490935 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7"} err="failed to get container status \"9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7\": rpc error: code = NotFound desc = could not find container \"9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7\": container with ID starting with 9ab6c2b0da3739a509a44597a0bc5e412753f36d6fb9a6fcbc3903d32d9210d7 not found: ID does not exist"
Dec 03 09:44:08 crc kubenswrapper[4856]: I1203 09:44:08.710572 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" path="/var/lib/kubelet/pods/7783c4fd-e1f3-4f3f-aea7-252bb07b9856/volumes"
Dec 03 09:44:09 crc kubenswrapper[4856]: I1203 09:44:09.689249 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:44:09 crc kubenswrapper[4856]: E1203 09:44:09.689591 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:44:19 crc kubenswrapper[4856]: I1203 09:44:19.494015 4856 generic.go:334] "Generic (PLEG): container finished" podID="a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c"
containerID="00cdd84f606cbccef58f6a014334fc9086d695d6a2bedd75aa1fd6de6aac4f6b" exitCode=0 Dec 03 09:44:19 crc kubenswrapper[4856]: I1203 09:44:19.494077 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" event={"ID":"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c","Type":"ContainerDied","Data":"00cdd84f606cbccef58f6a014334fc9086d695d6a2bedd75aa1fd6de6aac4f6b"} Dec 03 09:44:20 crc kubenswrapper[4856]: I1203 09:44:20.954583 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" Dec 03 09:44:20 crc kubenswrapper[4856]: I1203 09:44:20.971623 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-inventory\") pod \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " Dec 03 09:44:20 crc kubenswrapper[4856]: I1203 09:44:20.971717 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-ssh-key\") pod \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " Dec 03 09:44:20 crc kubenswrapper[4856]: I1203 09:44:20.971919 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glc49\" (UniqueName: \"kubernetes.io/projected/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-kube-api-access-glc49\") pod \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\" (UID: \"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c\") " Dec 03 09:44:20 crc kubenswrapper[4856]: I1203 09:44:20.993009 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-kube-api-access-glc49" (OuterVolumeSpecName: "kube-api-access-glc49") pod "a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c" (UID: "a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c"). InnerVolumeSpecName "kube-api-access-glc49". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.024988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c" (UID: "a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.027023 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-inventory" (OuterVolumeSpecName: "inventory") pod "a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c" (UID: "a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.078058 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glc49\" (UniqueName: \"kubernetes.io/projected/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-kube-api-access-glc49\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.078890 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.079031 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.520479 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" event={"ID":"a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c","Type":"ContainerDied","Data":"f7d1419c2e5675584613f5543dac5b22f1d26a406d3ac76c663c33c111539728"} Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.520542 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7d1419c2e5675584613f5543dac5b22f1d26a406d3ac76c663c33c111539728" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.521017 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.621014 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6k8w6"] Dec 03 09:44:21 crc kubenswrapper[4856]: E1203 09:44:21.621669 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="extract-content" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.621689 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="extract-content" Dec 03 09:44:21 crc kubenswrapper[4856]: E1203 09:44:21.621727 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="extract-utilities" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.621735 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="extract-utilities" Dec 03 09:44:21 crc kubenswrapper[4856]: E1203 09:44:21.621747 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="registry-server" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.621755 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="registry-server" Dec 03 09:44:21 crc kubenswrapper[4856]: E1203 09:44:21.621780 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.621790 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.622031 4856 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="7783c4fd-e1f3-4f3f-aea7-252bb07b9856" containerName="registry-server" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.622046 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.622956 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.628651 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.628846 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.629259 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.629417 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.635102 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6k8w6"] Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.692669 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.693138 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.693326 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qg8x\" (UniqueName: \"kubernetes.io/projected/20a18377-44d6-4f4e-b2a4-24470b9bf24e-kube-api-access-2qg8x\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.796405 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.797150 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 
03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.797515 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qg8x\" (UniqueName: \"kubernetes.io/projected/20a18377-44d6-4f4e-b2a4-24470b9bf24e-kube-api-access-2qg8x\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.802644 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.802785 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.818544 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qg8x\" (UniqueName: \"kubernetes.io/projected/20a18377-44d6-4f4e-b2a4-24470b9bf24e-kube-api-access-2qg8x\") pod \"ssh-known-hosts-edpm-deployment-6k8w6\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:21 crc kubenswrapper[4856]: I1203 09:44:21.946854 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:22 crc kubenswrapper[4856]: I1203 09:44:22.542556 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6k8w6"] Dec 03 09:44:23 crc kubenswrapper[4856]: I1203 09:44:23.548872 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" event={"ID":"20a18377-44d6-4f4e-b2a4-24470b9bf24e","Type":"ContainerStarted","Data":"c147de31c63dfe83ffe626678e87c44725ec94a7a1637534f988abb68e7bf6bb"} Dec 03 09:44:23 crc kubenswrapper[4856]: I1203 09:44:23.549495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" event={"ID":"20a18377-44d6-4f4e-b2a4-24470b9bf24e","Type":"ContainerStarted","Data":"23a30b927ceabc79a937f336e77f9981bf4fbe78b09efc472471a1976d15f1b4"} Dec 03 09:44:23 crc kubenswrapper[4856]: I1203 09:44:23.690407 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:44:23 crc kubenswrapper[4856]: E1203 09:44:23.690911 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:44:24 crc kubenswrapper[4856]: I1203 09:44:24.575694 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" podStartSLOduration=3.002286562 
podStartE2EDuration="3.57566077s" podCreationTimestamp="2025-12-03 09:44:21 +0000 UTC" firstStartedPulling="2025-12-03 09:44:22.550487535 +0000 UTC m=+1930.733379836" lastFinishedPulling="2025-12-03 09:44:23.123861743 +0000 UTC m=+1931.306754044" observedRunningTime="2025-12-03 09:44:24.573002132 +0000 UTC m=+1932.755894453" watchObservedRunningTime="2025-12-03 09:44:24.57566077 +0000 UTC m=+1932.758553071" Dec 03 09:44:31 crc kubenswrapper[4856]: I1203 09:44:31.650311 4856 generic.go:334] "Generic (PLEG): container finished" podID="20a18377-44d6-4f4e-b2a4-24470b9bf24e" containerID="c147de31c63dfe83ffe626678e87c44725ec94a7a1637534f988abb68e7bf6bb" exitCode=0 Dec 03 09:44:31 crc kubenswrapper[4856]: I1203 09:44:31.650439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" event={"ID":"20a18377-44d6-4f4e-b2a4-24470b9bf24e","Type":"ContainerDied","Data":"c147de31c63dfe83ffe626678e87c44725ec94a7a1637534f988abb68e7bf6bb"} Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.224054 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.397552 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0\") pod \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.398234 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-ssh-key-openstack-edpm-ipam\") pod \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.398402 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qg8x\" (UniqueName: \"kubernetes.io/projected/20a18377-44d6-4f4e-b2a4-24470b9bf24e-kube-api-access-2qg8x\") pod \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.405315 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a18377-44d6-4f4e-b2a4-24470b9bf24e-kube-api-access-2qg8x" (OuterVolumeSpecName: "kube-api-access-2qg8x") pod "20a18377-44d6-4f4e-b2a4-24470b9bf24e" (UID: "20a18377-44d6-4f4e-b2a4-24470b9bf24e"). InnerVolumeSpecName "kube-api-access-2qg8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:44:33 crc kubenswrapper[4856]: E1203 09:44:33.429962 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0 podName:20a18377-44d6-4f4e-b2a4-24470b9bf24e nodeName:}" failed. No retries permitted until 2025-12-03 09:44:33.929916802 +0000 UTC m=+1942.112809113 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory-0" (UniqueName: "kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0") pod "20a18377-44d6-4f4e-b2a4-24470b9bf24e" (UID: "20a18377-44d6-4f4e-b2a4-24470b9bf24e") : error deleting /var/lib/kubelet/pods/20a18377-44d6-4f4e-b2a4-24470b9bf24e/volume-subpaths: remove /var/lib/kubelet/pods/20a18377-44d6-4f4e-b2a4-24470b9bf24e/volume-subpaths: no such file or directory Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.434786 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "20a18377-44d6-4f4e-b2a4-24470b9bf24e" (UID: "20a18377-44d6-4f4e-b2a4-24470b9bf24e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.501143 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qg8x\" (UniqueName: \"kubernetes.io/projected/20a18377-44d6-4f4e-b2a4-24470b9bf24e-kube-api-access-2qg8x\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.501191 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.672724 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" event={"ID":"20a18377-44d6-4f4e-b2a4-24470b9bf24e","Type":"ContainerDied","Data":"23a30b927ceabc79a937f336e77f9981bf4fbe78b09efc472471a1976d15f1b4"} Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.672819 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6k8w6" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.672799 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a30b927ceabc79a937f336e77f9981bf4fbe78b09efc472471a1976d15f1b4" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.765615 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f"] Dec 03 09:44:33 crc kubenswrapper[4856]: E1203 09:44:33.767828 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a18377-44d6-4f4e-b2a4-24470b9bf24e" containerName="ssh-known-hosts-edpm-deployment" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.767964 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a18377-44d6-4f4e-b2a4-24470b9bf24e" containerName="ssh-known-hosts-edpm-deployment" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.768306 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a18377-44d6-4f4e-b2a4-24470b9bf24e" containerName="ssh-known-hosts-edpm-deployment" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.769390 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.793650 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f"] Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.909689 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shmh7\" (UniqueName: \"kubernetes.io/projected/2cb5d44f-36f7-4bf5-b688-ed331f254afd-kube-api-access-shmh7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.910096 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:33 crc kubenswrapper[4856]: I1203 09:44:33.910258 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.011706 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0\") pod \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\" (UID: \"20a18377-44d6-4f4e-b2a4-24470b9bf24e\") " Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.012439 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.012518 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.012642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shmh7\" (UniqueName: \"kubernetes.io/projected/2cb5d44f-36f7-4bf5-b688-ed331f254afd-kube-api-access-shmh7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.015354 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "20a18377-44d6-4f4e-b2a4-24470b9bf24e" (UID: 
"20a18377-44d6-4f4e-b2a4-24470b9bf24e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.017615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.018022 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.045277 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shmh7\" (UniqueName: \"kubernetes.io/projected/2cb5d44f-36f7-4bf5-b688-ed331f254afd-kube-api-access-shmh7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ksn7f\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.091089 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.115843 4856 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20a18377-44d6-4f4e-b2a4-24470b9bf24e-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:34 crc kubenswrapper[4856]: I1203 09:44:34.724525 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f"] Dec 03 09:44:35 crc kubenswrapper[4856]: I1203 09:44:35.689476 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:44:35 crc kubenswrapper[4856]: E1203 09:44:35.690329 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:44:35 crc kubenswrapper[4856]: I1203 09:44:35.697153 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" event={"ID":"2cb5d44f-36f7-4bf5-b688-ed331f254afd","Type":"ContainerStarted","Data":"d399bbb3c72ab9ad916b92a468330cc9b334175e23d8259bc1bd0f7aab85322a"} Dec 03 09:44:35 crc kubenswrapper[4856]: I1203 09:44:35.697227 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" event={"ID":"2cb5d44f-36f7-4bf5-b688-ed331f254afd","Type":"ContainerStarted","Data":"7bf7bf4b0c76ccfce1115a1e4942b82f5c68d3a59abd8219561aa3265aa79fc5"} Dec 03 09:44:35 crc kubenswrapper[4856]: I1203 09:44:35.724230 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" podStartSLOduration=2.286188115 podStartE2EDuration="2.724201704s" podCreationTimestamp="2025-12-03 09:44:33 +0000 UTC" firstStartedPulling="2025-12-03 09:44:34.734047397 +0000 UTC m=+1942.916939698" lastFinishedPulling="2025-12-03 09:44:35.172060986 +0000 UTC m=+1943.354953287" observedRunningTime="2025-12-03 09:44:35.721442154 +0000 UTC m=+1943.904334465" watchObservedRunningTime="2025-12-03 09:44:35.724201704 +0000 UTC m=+1943.907093995" Dec 03 09:44:38 crc kubenswrapper[4856]: I1203 09:44:38.562888 4856 scope.go:117] "RemoveContainer" containerID="9bb171008f694d76e4b57a89d46ae637f59c5e443e24e8d1ed158e0dcfbfc77e" Dec 03 09:44:43 crc kubenswrapper[4856]: I1203 09:44:43.802513 4856 generic.go:334] "Generic (PLEG): container finished" podID="2cb5d44f-36f7-4bf5-b688-ed331f254afd" containerID="d399bbb3c72ab9ad916b92a468330cc9b334175e23d8259bc1bd0f7aab85322a" exitCode=0 Dec 03 09:44:43 crc kubenswrapper[4856]: I1203 09:44:43.802700 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" event={"ID":"2cb5d44f-36f7-4bf5-b688-ed331f254afd","Type":"ContainerDied","Data":"d399bbb3c72ab9ad916b92a468330cc9b334175e23d8259bc1bd0f7aab85322a"} Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.262350 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.401511 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shmh7\" (UniqueName: \"kubernetes.io/projected/2cb5d44f-36f7-4bf5-b688-ed331f254afd-kube-api-access-shmh7\") pod \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.401858 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-inventory\") pod \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.402099 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-ssh-key\") pod \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\" (UID: \"2cb5d44f-36f7-4bf5-b688-ed331f254afd\") " Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.410088 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb5d44f-36f7-4bf5-b688-ed331f254afd-kube-api-access-shmh7" (OuterVolumeSpecName: "kube-api-access-shmh7") pod "2cb5d44f-36f7-4bf5-b688-ed331f254afd" (UID: "2cb5d44f-36f7-4bf5-b688-ed331f254afd"). InnerVolumeSpecName "kube-api-access-shmh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.436337 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2cb5d44f-36f7-4bf5-b688-ed331f254afd" (UID: "2cb5d44f-36f7-4bf5-b688-ed331f254afd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.437844 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-inventory" (OuterVolumeSpecName: "inventory") pod "2cb5d44f-36f7-4bf5-b688-ed331f254afd" (UID: "2cb5d44f-36f7-4bf5-b688-ed331f254afd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.505826 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shmh7\" (UniqueName: \"kubernetes.io/projected/2cb5d44f-36f7-4bf5-b688-ed331f254afd-kube-api-access-shmh7\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.506189 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.506259 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cb5d44f-36f7-4bf5-b688-ed331f254afd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.826304 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" event={"ID":"2cb5d44f-36f7-4bf5-b688-ed331f254afd","Type":"ContainerDied","Data":"7bf7bf4b0c76ccfce1115a1e4942b82f5c68d3a59abd8219561aa3265aa79fc5"} Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.826370 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf7bf4b0c76ccfce1115a1e4942b82f5c68d3a59abd8219561aa3265aa79fc5" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.826818 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ksn7f" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.959190 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq"] Dec 03 09:44:45 crc kubenswrapper[4856]: E1203 09:44:45.960230 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb5d44f-36f7-4bf5-b688-ed331f254afd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.960254 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb5d44f-36f7-4bf5-b688-ed331f254afd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.960816 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb5d44f-36f7-4bf5-b688-ed331f254afd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.962073 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.969043 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.969305 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.969560 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.969721 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:44:45 crc kubenswrapper[4856]: I1203 09:44:45.987061 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq"] Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.121843 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.122063 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncpz\" (UniqueName: \"kubernetes.io/projected/161b93b3-c4c6-4f99-a419-f00bed34b046-kube-api-access-wncpz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.122178 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.225413 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.226111 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncpz\" (UniqueName: \"kubernetes.io/projected/161b93b3-c4c6-4f99-a419-f00bed34b046-kube-api-access-wncpz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.226182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: 
\"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.236213 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.238578 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.245951 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncpz\" (UniqueName: \"kubernetes.io/projected/161b93b3-c4c6-4f99-a419-f00bed34b046-kube-api-access-wncpz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.330489 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.690001 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:44:46 crc kubenswrapper[4856]: E1203 09:44:46.690942 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:44:46 crc kubenswrapper[4856]: I1203 09:44:46.878671 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq"] Dec 03 09:44:47 crc kubenswrapper[4856]: I1203 09:44:47.850654 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" event={"ID":"161b93b3-c4c6-4f99-a419-f00bed34b046","Type":"ContainerStarted","Data":"be8638c0a9ff24bd76c08faee000a8815f3783a60b37d0a4ea2b138d78267e37"} Dec 03 09:44:47 crc kubenswrapper[4856]: I1203 09:44:47.851964 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" event={"ID":"161b93b3-c4c6-4f99-a419-f00bed34b046","Type":"ContainerStarted","Data":"eee7d2fa9de4c9752d3e015993ba121209e3f895102540840d07333ff05cd877"} Dec 03 09:44:47 crc kubenswrapper[4856]: I1203 09:44:47.872261 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" podStartSLOduration=2.415018505 podStartE2EDuration="2.872226882s" podCreationTimestamp="2025-12-03 09:44:45 +0000 UTC" firstStartedPulling="2025-12-03 09:44:46.887065351 +0000 UTC m=+1955.069957652" lastFinishedPulling="2025-12-03 
Dec 03 09:44:58 crc kubenswrapper[4856]: I1203 09:44:58.274588 4856 generic.go:334] "Generic (PLEG): container finished" podID="161b93b3-c4c6-4f99-a419-f00bed34b046" containerID="be8638c0a9ff24bd76c08faee000a8815f3783a60b37d0a4ea2b138d78267e37" exitCode=0
Dec 03 09:44:58 crc kubenswrapper[4856]: I1203 09:44:58.274683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" event={"ID":"161b93b3-c4c6-4f99-a419-f00bed34b046","Type":"ContainerDied","Data":"be8638c0a9ff24bd76c08faee000a8815f3783a60b37d0a4ea2b138d78267e37"}
Dec 03 09:44:58 crc kubenswrapper[4856]: I1203 09:44:58.689892 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149"
Dec 03 09:44:58 crc kubenswrapper[4856]: E1203 09:44:58.691041 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.785558 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq"
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.910739 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wncpz\" (UniqueName: \"kubernetes.io/projected/161b93b3-c4c6-4f99-a419-f00bed34b046-kube-api-access-wncpz\") pod \"161b93b3-c4c6-4f99-a419-f00bed34b046\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") "
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.910877 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-ssh-key\") pod \"161b93b3-c4c6-4f99-a419-f00bed34b046\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") "
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.911035 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-inventory\") pod \"161b93b3-c4c6-4f99-a419-f00bed34b046\" (UID: \"161b93b3-c4c6-4f99-a419-f00bed34b046\") "
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.918555 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161b93b3-c4c6-4f99-a419-f00bed34b046-kube-api-access-wncpz" (OuterVolumeSpecName: "kube-api-access-wncpz") pod "161b93b3-c4c6-4f99-a419-f00bed34b046" (UID: "161b93b3-c4c6-4f99-a419-f00bed34b046"). InnerVolumeSpecName "kube-api-access-wncpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.948720 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-inventory" (OuterVolumeSpecName: "inventory") pod "161b93b3-c4c6-4f99-a419-f00bed34b046" (UID: "161b93b3-c4c6-4f99-a419-f00bed34b046"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:44:59 crc kubenswrapper[4856]: I1203 09:44:59.958315 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "161b93b3-c4c6-4f99-a419-f00bed34b046" (UID: "161b93b3-c4c6-4f99-a419-f00bed34b046"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.014842 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.015226 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wncpz\" (UniqueName: \"kubernetes.io/projected/161b93b3-c4c6-4f99-a419-f00bed34b046-kube-api-access-wncpz\") on node \"crc\" DevicePath \"\""
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.015312 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/161b93b3-c4c6-4f99-a419-f00bed34b046-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.144502 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"]
Dec 03 09:45:00 crc kubenswrapper[4856]: E1203 09:45:00.145173 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161b93b3-c4c6-4f99-a419-f00bed34b046" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.145206 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="161b93b3-c4c6-4f99-a419-f00bed34b046" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.145455 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="161b93b3-c4c6-4f99-a419-f00bed34b046" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.146406 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.149366 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.149430 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.159351 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"]
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.222918 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf16af8-86d8-4936-bdbe-0dab0602f51d-secret-volume\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.223091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4mt\" (UniqueName: \"kubernetes.io/projected/adf16af8-86d8-4936-bdbe-0dab0602f51d-kube-api-access-rj4mt\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.223252 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf16af8-86d8-4936-bdbe-0dab0602f51d-config-volume\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.300511 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" event={"ID":"161b93b3-c4c6-4f99-a419-f00bed34b046","Type":"ContainerDied","Data":"eee7d2fa9de4c9752d3e015993ba121209e3f895102540840d07333ff05cd877"}
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.300566 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee7d2fa9de4c9752d3e015993ba121209e3f895102540840d07333ff05cd877"
Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.300622 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq"
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.325927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf16af8-86d8-4936-bdbe-0dab0602f51d-secret-volume\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.326035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4mt\" (UniqueName: \"kubernetes.io/projected/adf16af8-86d8-4936-bdbe-0dab0602f51d-kube-api-access-rj4mt\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.326200 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf16af8-86d8-4936-bdbe-0dab0602f51d-config-volume\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.327573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf16af8-86d8-4936-bdbe-0dab0602f51d-config-volume\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.336644 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf16af8-86d8-4936-bdbe-0dab0602f51d-secret-volume\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.345498 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4mt\" (UniqueName: \"kubernetes.io/projected/adf16af8-86d8-4936-bdbe-0dab0602f51d-kube-api-access-rj4mt\") pod \"collect-profiles-29412585-vnfmn\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.435881 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6"] Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.437685 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442040 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442109 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442207 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442344 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442432 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442645 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.442735 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.443215 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.461670 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6"] Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.483481 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.530993 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531096 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531377 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531464 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531728 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.531957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.532118 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.532302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.532338 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.532434 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.532653 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfpx9\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-kube-api-access-zfpx9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.532824 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635084 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635168 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635200 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635238 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635287 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635476 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635519 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635573 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635620 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfpx9\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-kube-api-access-zfpx9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.635763 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.641992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.643557 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.645864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.646078 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.647068 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.647432 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.647548 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.647777 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.647989 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.648482 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.648491 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.648729 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.649534 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.663131 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfpx9\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-kube-api-access-zfpx9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:00 crc kubenswrapper[4856]: I1203 09:45:00.760334 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:01 crc kubenswrapper[4856]: I1203 09:45:01.002006 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn"] Dec 03 09:45:01 crc kubenswrapper[4856]: I1203 09:45:01.188556 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6"] Dec 03 09:45:01 crc kubenswrapper[4856]: I1203 09:45:01.314934 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" event={"ID":"77973943-428f-4578-91c1-ed94f2616c7e","Type":"ContainerStarted","Data":"67d5763f4c0ed829978956401ab14278485a26d33e2028ed635e8f6d2953f4ad"} Dec 03 09:45:01 crc kubenswrapper[4856]: I1203 09:45:01.317871 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" event={"ID":"adf16af8-86d8-4936-bdbe-0dab0602f51d","Type":"ContainerStarted","Data":"18ede4f71010bf60aab87cacdf2c51eb1dd4ee5cdeed6de2128a5521b4ba05b5"} Dec 03 09:45:01 crc kubenswrapper[4856]: I1203 09:45:01.317953 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" event={"ID":"adf16af8-86d8-4936-bdbe-0dab0602f51d","Type":"ContainerStarted","Data":"0074aedeb14ead9f6be809803b17f11c14f7950a7d49315901664f47ae4db6c9"} Dec 03 09:45:01 crc kubenswrapper[4856]: I1203 09:45:01.339619 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" podStartSLOduration=1.339593221 podStartE2EDuration="1.339593221s" podCreationTimestamp="2025-12-03 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 09:45:01.335967439 +0000 UTC m=+1969.518859760" watchObservedRunningTime="2025-12-03 09:45:01.339593221 +0000 UTC m=+1969.522485522" Dec 03 09:45:02 crc kubenswrapper[4856]: I1203 09:45:02.332087 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" event={"ID":"77973943-428f-4578-91c1-ed94f2616c7e","Type":"ContainerStarted","Data":"034cbc134ab59a5683a04bbc71aea047cfbc31ffe58b6ffee37c78f5d9f15e5c"} Dec 03 09:45:02 crc kubenswrapper[4856]: I1203 09:45:02.397546 4856 generic.go:334] "Generic (PLEG): container finished" podID="adf16af8-86d8-4936-bdbe-0dab0602f51d" containerID="18ede4f71010bf60aab87cacdf2c51eb1dd4ee5cdeed6de2128a5521b4ba05b5" exitCode=0 Dec 03 09:45:02 crc kubenswrapper[4856]: I1203 09:45:02.397638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" event={"ID":"adf16af8-86d8-4936-bdbe-0dab0602f51d","Type":"ContainerDied","Data":"18ede4f71010bf60aab87cacdf2c51eb1dd4ee5cdeed6de2128a5521b4ba05b5"} Dec 03 09:45:02 crc kubenswrapper[4856]: I1203 09:45:02.419542 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" podStartSLOduration=1.85119854 podStartE2EDuration="2.419499589s" podCreationTimestamp="2025-12-03 09:45:00 +0000 UTC" firstStartedPulling="2025-12-03 09:45:01.206717885 +0000 UTC m=+1969.389610186" lastFinishedPulling="2025-12-03 09:45:01.775018934 +0000 UTC m=+1969.957911235" 
observedRunningTime="2025-12-03 09:45:02.405120564 +0000 UTC m=+1970.588012885" watchObservedRunningTime="2025-12-03 09:45:02.419499589 +0000 UTC m=+1970.602391890" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.785099 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.816592 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf16af8-86d8-4936-bdbe-0dab0602f51d-config-volume\") pod \"adf16af8-86d8-4936-bdbe-0dab0602f51d\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.816750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4mt\" (UniqueName: \"kubernetes.io/projected/adf16af8-86d8-4936-bdbe-0dab0602f51d-kube-api-access-rj4mt\") pod \"adf16af8-86d8-4936-bdbe-0dab0602f51d\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.816828 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf16af8-86d8-4936-bdbe-0dab0602f51d-secret-volume\") pod \"adf16af8-86d8-4936-bdbe-0dab0602f51d\" (UID: \"adf16af8-86d8-4936-bdbe-0dab0602f51d\") " Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.818589 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf16af8-86d8-4936-bdbe-0dab0602f51d-config-volume" (OuterVolumeSpecName: "config-volume") pod "adf16af8-86d8-4936-bdbe-0dab0602f51d" (UID: "adf16af8-86d8-4936-bdbe-0dab0602f51d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.864417 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf16af8-86d8-4936-bdbe-0dab0602f51d-kube-api-access-rj4mt" (OuterVolumeSpecName: "kube-api-access-rj4mt") pod "adf16af8-86d8-4936-bdbe-0dab0602f51d" (UID: "adf16af8-86d8-4936-bdbe-0dab0602f51d"). InnerVolumeSpecName "kube-api-access-rj4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.866252 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf16af8-86d8-4936-bdbe-0dab0602f51d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "adf16af8-86d8-4936-bdbe-0dab0602f51d" (UID: "adf16af8-86d8-4936-bdbe-0dab0602f51d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.920023 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adf16af8-86d8-4936-bdbe-0dab0602f51d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.920073 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf16af8-86d8-4936-bdbe-0dab0602f51d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:03 crc kubenswrapper[4856]: I1203 09:45:03.920088 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj4mt\" (UniqueName: \"kubernetes.io/projected/adf16af8-86d8-4936-bdbe-0dab0602f51d-kube-api-access-rj4mt\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:04 crc kubenswrapper[4856]: I1203 09:45:04.441593 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" event={"ID":"adf16af8-86d8-4936-bdbe-0dab0602f51d","Type":"ContainerDied","Data":"0074aedeb14ead9f6be809803b17f11c14f7950a7d49315901664f47ae4db6c9"} Dec 03 09:45:04 crc kubenswrapper[4856]: I1203 09:45:04.441667 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0074aedeb14ead9f6be809803b17f11c14f7950a7d49315901664f47ae4db6c9" Dec 03 09:45:04 crc kubenswrapper[4856]: I1203 09:45:04.442227 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412585-vnfmn" Dec 03 09:45:09 crc kubenswrapper[4856]: I1203 09:45:09.689141 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:45:09 crc kubenswrapper[4856]: E1203 09:45:09.690666 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:45:21 crc kubenswrapper[4856]: I1203 09:45:21.689506 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:45:21 crc kubenswrapper[4856]: E1203 09:45:21.690699 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:45:36 crc kubenswrapper[4856]: I1203 09:45:36.689988 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:45:36 crc kubenswrapper[4856]: E1203 09:45:36.691538 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:45:38 crc kubenswrapper[4856]: I1203 09:45:38.699632 4856 scope.go:117] "RemoveContainer" containerID="553b2d7bc16dd6daf85cdd2910366312aedc3f9b88a2423233643d0d1ab69409" Dec 03 09:45:38 crc kubenswrapper[4856]: I1203 09:45:38.786283 4856 scope.go:117] "RemoveContainer" containerID="f39b81c3f972b5170c090fb3893d58ce2a3797a6be9441aa44d82b044f47c132" Dec 03 09:45:38 crc kubenswrapper[4856]: I1203 09:45:38.873444 4856 scope.go:117] "RemoveContainer" containerID="b6d3d2dbe6f0a0d4c8adcde316808d767093fa73ea05ef45de38c726eef04054" Dec 03 09:45:44 crc kubenswrapper[4856]: I1203 09:45:44.968464 4856 generic.go:334] "Generic (PLEG): container finished" podID="77973943-428f-4578-91c1-ed94f2616c7e" containerID="034cbc134ab59a5683a04bbc71aea047cfbc31ffe58b6ffee37c78f5d9f15e5c" exitCode=0 Dec 03 09:45:44 crc kubenswrapper[4856]: I1203 09:45:44.968559 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" event={"ID":"77973943-428f-4578-91c1-ed94f2616c7e","Type":"ContainerDied","Data":"034cbc134ab59a5683a04bbc71aea047cfbc31ffe58b6ffee37c78f5d9f15e5c"} Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.485231 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.654782 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-repo-setup-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.654922 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-inventory\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.654976 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-nova-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655219 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-libvirt-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-bootstrap-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655303 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655331 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfpx9\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-kube-api-access-zfpx9\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655444 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-neutron-metadata-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655472 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655512 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ssh-key\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655548 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655581 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ovn-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655600 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.655624 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-telemetry-combined-ca-bundle\") pod \"77973943-428f-4578-91c1-ed94f2616c7e\" (UID: \"77973943-428f-4578-91c1-ed94f2616c7e\") " Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.664774 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: 
"neutron-metadata-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.665086 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.665270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-kube-api-access-zfpx9" (OuterVolumeSpecName: "kube-api-access-zfpx9") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "kube-api-access-zfpx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.666230 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.666994 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.667018 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.668824 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.669035 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.669221 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.670749 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.672710 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.674996 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.699655 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-inventory" (OuterVolumeSpecName: "inventory") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.699770 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "77973943-428f-4578-91c1-ed94f2616c7e" (UID: "77973943-428f-4578-91c1-ed94f2616c7e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.757986 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758026 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758041 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758054 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758066 4856 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758076 4856 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758085 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758094 4856 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758103 4856 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758112 4856 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758120 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758135 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfpx9\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-kube-api-access-zfpx9\") on node \"crc\" 
DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758144 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77973943-428f-4578-91c1-ed94f2616c7e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.758153 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/77973943-428f-4578-91c1-ed94f2616c7e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.993595 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" event={"ID":"77973943-428f-4578-91c1-ed94f2616c7e","Type":"ContainerDied","Data":"67d5763f4c0ed829978956401ab14278485a26d33e2028ed635e8f6d2953f4ad"} Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.994106 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d5763f4c0ed829978956401ab14278485a26d33e2028ed635e8f6d2953f4ad" Dec 03 09:45:46 crc kubenswrapper[4856]: I1203 09:45:46.993651 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.178893 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr"] Dec 03 09:45:47 crc kubenswrapper[4856]: E1203 09:45:47.181340 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf16af8-86d8-4936-bdbe-0dab0602f51d" containerName="collect-profiles" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.181549 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf16af8-86d8-4936-bdbe-0dab0602f51d" containerName="collect-profiles" Dec 03 09:45:47 crc kubenswrapper[4856]: E1203 09:45:47.187006 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77973943-428f-4578-91c1-ed94f2616c7e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.187259 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="77973943-428f-4578-91c1-ed94f2616c7e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.188523 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="77973943-428f-4578-91c1-ed94f2616c7e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.188621 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf16af8-86d8-4936-bdbe-0dab0602f51d" containerName="collect-profiles" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.190464 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.196197 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.196599 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.200952 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr"] Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.202140 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.202259 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.202158 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.284295 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2thw\" (UniqueName: \"kubernetes.io/projected/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-kube-api-access-k2thw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.284779 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.284961 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.285059 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.285283 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.387746 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.387902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2thw\" (UniqueName: \"kubernetes.io/projected/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-kube-api-access-k2thw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.387932 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.387969 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.387990 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.389529 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.392818 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.392986 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.393047 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.412269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2thw\" (UniqueName: \"kubernetes.io/projected/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-kube-api-access-k2thw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95flr\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:47 crc kubenswrapper[4856]: I1203 09:45:47.514703 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:45:48 crc kubenswrapper[4856]: I1203 09:45:48.107177 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr"] Dec 03 09:45:49 crc kubenswrapper[4856]: I1203 09:45:49.017616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" event={"ID":"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90","Type":"ContainerStarted","Data":"fd13bdae5c7c69fdc7048636fb4b817d0e52249635a0fcba036bf9d6595be574"} Dec 03 09:45:49 crc kubenswrapper[4856]: I1203 09:45:49.018357 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" event={"ID":"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90","Type":"ContainerStarted","Data":"ed220e1f32183828b45593039e60dc442fcc62bc08ca82d619239a4c44a30ccf"} Dec 03 09:45:49 crc kubenswrapper[4856]: I1203 09:45:49.040951 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" podStartSLOduration=1.501896419 podStartE2EDuration="2.040919034s" podCreationTimestamp="2025-12-03 09:45:47 +0000 UTC" firstStartedPulling="2025-12-03 09:45:48.120732895 +0000 UTC m=+2016.303625196" lastFinishedPulling="2025-12-03 09:45:48.65975551 +0000 UTC m=+2016.842647811" observedRunningTime="2025-12-03 09:45:49.033906986 +0000 UTC m=+2017.216799287" watchObservedRunningTime="2025-12-03 09:45:49.040919034 +0000 UTC m=+2017.223811335" Dec 03 09:45:50 crc kubenswrapper[4856]: I1203 09:45:50.689464 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:45:50 crc kubenswrapper[4856]: E1203 09:45:50.691614 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:46:04 crc kubenswrapper[4856]: I1203 09:46:04.690094 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:46:05 crc kubenswrapper[4856]: I1203 09:46:05.337829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"b42a1cc69efc601feed340cd22d1f8f16720df8a4255b6c2d037f29e60c7f9aa"} Dec 03 09:46:55 crc kubenswrapper[4856]: I1203 09:46:55.884888 4856 generic.go:334] "Generic (PLEG): container finished" podID="37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" 
containerID="fd13bdae5c7c69fdc7048636fb4b817d0e52249635a0fcba036bf9d6595be574" exitCode=0 Dec 03 09:46:55 crc kubenswrapper[4856]: I1203 09:46:55.884983 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" event={"ID":"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90","Type":"ContainerDied","Data":"fd13bdae5c7c69fdc7048636fb4b817d0e52249635a0fcba036bf9d6595be574"} Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.368395 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.529129 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovn-combined-ca-bundle\") pod \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.529238 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2thw\" (UniqueName: \"kubernetes.io/projected/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-kube-api-access-k2thw\") pod \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.529264 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-inventory\") pod \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.529493 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ssh-key\") pod \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.529542 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovncontroller-config-0\") pod \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\" (UID: \"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90\") " Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.538829 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-kube-api-access-k2thw" (OuterVolumeSpecName: "kube-api-access-k2thw") pod "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" (UID: "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90"). InnerVolumeSpecName "kube-api-access-k2thw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.539194 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" (UID: "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.567186 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" (UID: "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.569993 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-inventory" (OuterVolumeSpecName: "inventory") pod "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" (UID: "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.576832 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" (UID: "37eb2f8b-1352-4ee3-9f78-afe97fd4ad90"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.632308 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.632372 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2thw\" (UniqueName: \"kubernetes.io/projected/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-kube-api-access-k2thw\") on node \"crc\" DevicePath \"\"" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.632390 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.632401 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.632411 4856 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/37eb2f8b-1352-4ee3-9f78-afe97fd4ad90-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.907959 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" event={"ID":"37eb2f8b-1352-4ee3-9f78-afe97fd4ad90","Type":"ContainerDied","Data":"ed220e1f32183828b45593039e60dc442fcc62bc08ca82d619239a4c44a30ccf"} Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.908033 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed220e1f32183828b45593039e60dc442fcc62bc08ca82d619239a4c44a30ccf" Dec 03 09:46:57 crc kubenswrapper[4856]: I1203 09:46:57.908040 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95flr" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.076967 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49"] Dec 03 09:46:58 crc kubenswrapper[4856]: E1203 09:46:58.077652 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.077682 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.077935 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="37eb2f8b-1352-4ee3-9f78-afe97fd4ad90" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.078925 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.081364 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.083221 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.084015 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.084090 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.084178 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.084377 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.088280 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49"] Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.153619 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.154022 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbhs\" (UniqueName: \"kubernetes.io/projected/e442bbaf-f226-4bed-a454-bbbaf90e44ff-kube-api-access-7zbhs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.154110 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.154254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.154379 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.154424 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.257621 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.257696 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.257728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.257798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" 
(UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.257915 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbhs\" (UniqueName: \"kubernetes.io/projected/e442bbaf-f226-4bed-a454-bbbaf90e44ff-kube-api-access-7zbhs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.257946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.263138 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.263170 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.263145 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.265574 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.271396 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.280329 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbhs\" (UniqueName: 
\"kubernetes.io/projected/e442bbaf-f226-4bed-a454-bbbaf90e44ff-kube-api-access-7zbhs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:58 crc kubenswrapper[4856]: I1203 09:46:58.434612 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:46:59 crc kubenswrapper[4856]: I1203 09:46:59.021516 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49"] Dec 03 09:46:59 crc kubenswrapper[4856]: I1203 09:46:59.931043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" event={"ID":"e442bbaf-f226-4bed-a454-bbbaf90e44ff","Type":"ContainerStarted","Data":"7b73b3152baf630804cc0053d32d562cf90d12166fbe790a3f0dff35d89d2ba3"} Dec 03 09:47:00 crc kubenswrapper[4856]: I1203 09:47:00.943800 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" event={"ID":"e442bbaf-f226-4bed-a454-bbbaf90e44ff","Type":"ContainerStarted","Data":"edacf4d93f20822b08890ad47c12738812f032a189b86fa4cd192998ad661b3e"} Dec 03 09:47:00 crc kubenswrapper[4856]: I1203 09:47:00.963827 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" podStartSLOduration=2.303840343 podStartE2EDuration="2.963791393s" podCreationTimestamp="2025-12-03 09:46:58 +0000 UTC" firstStartedPulling="2025-12-03 09:46:59.030726332 +0000 UTC m=+2087.213618633" lastFinishedPulling="2025-12-03 09:46:59.690677382 +0000 UTC m=+2087.873569683" observedRunningTime="2025-12-03 09:47:00.959378351 +0000 UTC m=+2089.142270672" watchObservedRunningTime="2025-12-03 09:47:00.963791393 +0000 UTC m=+2089.146683694" Dec 03 09:47:50 crc kubenswrapper[4856]: I1203 09:47:50.501314 4856 generic.go:334] "Generic (PLEG): container finished" podID="e442bbaf-f226-4bed-a454-bbbaf90e44ff" containerID="edacf4d93f20822b08890ad47c12738812f032a189b86fa4cd192998ad661b3e" exitCode=0 Dec 03 09:47:50 crc kubenswrapper[4856]: I1203 09:47:50.501869 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" event={"ID":"e442bbaf-f226-4bed-a454-bbbaf90e44ff","Type":"ContainerDied","Data":"edacf4d93f20822b08890ad47c12738812f032a189b86fa4cd192998ad661b3e"} Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.027503 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.156466 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-inventory\") pod \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.156555 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-nova-metadata-neutron-config-0\") pod \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.156622 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-ssh-key\") pod \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.156713 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.157760 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-metadata-combined-ca-bundle\") pod \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.160078 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zbhs\" (UniqueName: \"kubernetes.io/projected/e442bbaf-f226-4bed-a454-bbbaf90e44ff-kube-api-access-7zbhs\") pod \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\" (UID: \"e442bbaf-f226-4bed-a454-bbbaf90e44ff\") " Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.163623 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e442bbaf-f226-4bed-a454-bbbaf90e44ff-kube-api-access-7zbhs" (OuterVolumeSpecName: "kube-api-access-7zbhs") pod "e442bbaf-f226-4bed-a454-bbbaf90e44ff" (UID: "e442bbaf-f226-4bed-a454-bbbaf90e44ff"). InnerVolumeSpecName "kube-api-access-7zbhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.166087 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e442bbaf-f226-4bed-a454-bbbaf90e44ff" (UID: "e442bbaf-f226-4bed-a454-bbbaf90e44ff"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.190271 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-inventory" (OuterVolumeSpecName: "inventory") pod "e442bbaf-f226-4bed-a454-bbbaf90e44ff" (UID: "e442bbaf-f226-4bed-a454-bbbaf90e44ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.190855 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e442bbaf-f226-4bed-a454-bbbaf90e44ff" (UID: "e442bbaf-f226-4bed-a454-bbbaf90e44ff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.196112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e442bbaf-f226-4bed-a454-bbbaf90e44ff" (UID: "e442bbaf-f226-4bed-a454-bbbaf90e44ff"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.199547 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e442bbaf-f226-4bed-a454-bbbaf90e44ff" (UID: "e442bbaf-f226-4bed-a454-bbbaf90e44ff"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.263641 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.263679 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zbhs\" (UniqueName: \"kubernetes.io/projected/e442bbaf-f226-4bed-a454-bbbaf90e44ff-kube-api-access-7zbhs\") on node \"crc\" DevicePath \"\"" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.263691 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.263700 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.263710 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.263720 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e442bbaf-f226-4bed-a454-bbbaf90e44ff-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.521153 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" event={"ID":"e442bbaf-f226-4bed-a454-bbbaf90e44ff","Type":"ContainerDied","Data":"7b73b3152baf630804cc0053d32d562cf90d12166fbe790a3f0dff35d89d2ba3"} Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.521196 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b73b3152baf630804cc0053d32d562cf90d12166fbe790a3f0dff35d89d2ba3" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.521256 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.629475 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz"] Dec 03 09:47:52 crc kubenswrapper[4856]: E1203 09:47:52.630178 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e442bbaf-f226-4bed-a454-bbbaf90e44ff" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.630199 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e442bbaf-f226-4bed-a454-bbbaf90e44ff" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.630481 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e442bbaf-f226-4bed-a454-bbbaf90e44ff" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.631389 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.642007 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz"] Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.672767 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.673124 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.673202 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.673335 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.673703 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.675958 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.676059 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.676278 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.676453 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.676544 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpk2\" (UniqueName: \"kubernetes.io/projected/484332af-13c0-4270-932a-181a6b3f879c-kube-api-access-blpk2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.779098 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.779190 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.779225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpk2\" (UniqueName: \"kubernetes.io/projected/484332af-13c0-4270-932a-181a6b3f879c-kube-api-access-blpk2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.779288 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.779321 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.786205 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.786241 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.787281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.787381 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.798043 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpk2\" (UniqueName: \"kubernetes.io/projected/484332af-13c0-4270-932a-181a6b3f879c-kube-api-access-blpk2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:52 crc kubenswrapper[4856]: I1203 09:47:52.993672 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:47:53 crc kubenswrapper[4856]: I1203 09:47:53.580752 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz"] Dec 03 09:47:54 crc kubenswrapper[4856]: I1203 09:47:54.544023 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" event={"ID":"484332af-13c0-4270-932a-181a6b3f879c","Type":"ContainerStarted","Data":"cef30378aceb1b63eadbfebdc9cf37f8917d1e208b32b825a6a92f7c0b734fa0"} Dec 03 09:47:54 crc kubenswrapper[4856]: I1203 09:47:54.544590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" event={"ID":"484332af-13c0-4270-932a-181a6b3f879c","Type":"ContainerStarted","Data":"b46663619ae4a50335d24a0723e5e0e027679f5dba761b8780803b0530728756"} Dec 03 09:47:54 crc kubenswrapper[4856]: I1203 09:47:54.573897 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" podStartSLOduration=2.075882697 podStartE2EDuration="2.573873491s" podCreationTimestamp="2025-12-03 09:47:52 +0000 UTC" firstStartedPulling="2025-12-03 09:47:53.591341754 +0000 UTC m=+2141.774234045" lastFinishedPulling="2025-12-03 09:47:54.089332548 +0000 UTC m=+2142.272224839" observedRunningTime="2025-12-03 09:47:54.567017566 +0000 UTC m=+2142.749909887" watchObservedRunningTime="2025-12-03 09:47:54.573873491 +0000 UTC m=+2142.756765792" Dec 03 09:48:22 crc kubenswrapper[4856]: I1203 09:48:22.758666 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:48:22 crc kubenswrapper[4856]: I1203 09:48:22.759577 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:48:52 crc kubenswrapper[4856]: I1203 09:48:52.759276 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:48:52 crc kubenswrapper[4856]: I1203 09:48:52.759976 4856 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:49:22 crc kubenswrapper[4856]: I1203 09:49:22.758709 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:49:22 crc kubenswrapper[4856]: I1203 09:49:22.759625 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:49:22 crc kubenswrapper[4856]: I1203 09:49:22.759692 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:49:22 crc kubenswrapper[4856]: I1203 09:49:22.760923 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b42a1cc69efc601feed340cd22d1f8f16720df8a4255b6c2d037f29e60c7f9aa"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:49:22 crc kubenswrapper[4856]: I1203 09:49:22.761004 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://b42a1cc69efc601feed340cd22d1f8f16720df8a4255b6c2d037f29e60c7f9aa" gracePeriod=600 Dec 03 09:49:23 crc kubenswrapper[4856]: I1203 09:49:23.795794 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="b42a1cc69efc601feed340cd22d1f8f16720df8a4255b6c2d037f29e60c7f9aa" exitCode=0 Dec 03 09:49:23 crc kubenswrapper[4856]: I1203 09:49:23.795865 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"b42a1cc69efc601feed340cd22d1f8f16720df8a4255b6c2d037f29e60c7f9aa"} Dec 03 09:49:23 crc kubenswrapper[4856]: I1203 09:49:23.798388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"} Dec 03 09:49:23 crc kubenswrapper[4856]: I1203 09:49:23.798424 4856 scope.go:117] "RemoveContainer" containerID="1925c31dbd27f94b009dde4363b7b3051b650c9a545090502c79ffe17ba31149" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.717049 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n8ztr"] Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.719745 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.735905 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8ztr"] Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.747515 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-utilities\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.747956 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhmg\" (UniqueName: \"kubernetes.io/projected/668827f0-78cb-45df-aee0-746fb49416e2-kube-api-access-kfhmg\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.748098 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-catalog-content\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.850143 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-utilities\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.850274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhmg\" (UniqueName: \"kubernetes.io/projected/668827f0-78cb-45df-aee0-746fb49416e2-kube-api-access-kfhmg\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.850696 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-utilities\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.850704 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-catalog-content\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.850767 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-catalog-content\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:47 crc kubenswrapper[4856]: I1203 09:49:47.875405 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kfhmg\" (UniqueName: \"kubernetes.io/projected/668827f0-78cb-45df-aee0-746fb49416e2-kube-api-access-kfhmg\") pod \"community-operators-n8ztr\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:48 crc kubenswrapper[4856]: I1203 09:49:48.049792 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:48 crc kubenswrapper[4856]: I1203 09:49:48.646159 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8ztr"] Dec 03 09:49:49 crc kubenswrapper[4856]: I1203 09:49:49.071616 4856 generic.go:334] "Generic (PLEG): container finished" podID="668827f0-78cb-45df-aee0-746fb49416e2" containerID="ff37ab353808105c9c53e93a70c453e7a40a3f39ac9f1f13600b4e9a3e3201b7" exitCode=0 Dec 03 09:49:49 crc kubenswrapper[4856]: I1203 09:49:49.071704 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerDied","Data":"ff37ab353808105c9c53e93a70c453e7a40a3f39ac9f1f13600b4e9a3e3201b7"} Dec 03 09:49:49 crc kubenswrapper[4856]: I1203 09:49:49.071981 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerStarted","Data":"cea19ce8a67a3c0fd8da2a229fb610f47c6fb89a691ab008d3a1827213c619e9"} Dec 03 09:49:49 crc kubenswrapper[4856]: I1203 09:49:49.074592 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:49:50 crc kubenswrapper[4856]: I1203 09:49:50.083775 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerStarted","Data":"c7cc36ac7b51fd72b2ab50f008e731f6955d49342dd2c9b227b92697a5d1e5a6"} Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.097640 4856 generic.go:334] "Generic (PLEG): container finished" podID="668827f0-78cb-45df-aee0-746fb49416e2" containerID="c7cc36ac7b51fd72b2ab50f008e731f6955d49342dd2c9b227b92697a5d1e5a6" exitCode=0 Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.097773 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerDied","Data":"c7cc36ac7b51fd72b2ab50f008e731f6955d49342dd2c9b227b92697a5d1e5a6"} Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.108748 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvxgg"] Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.110893 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.119242 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvxgg"] Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.234124 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-catalog-content\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.234230 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-utilities\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.234265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xm8j\" (UniqueName: \"kubernetes.io/projected/39f8e461-2614-4abf-8664-3f7c8be6b16c-kube-api-access-7xm8j\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.336617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-catalog-content\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.336735 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-utilities\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.336768 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xm8j\" (UniqueName: \"kubernetes.io/projected/39f8e461-2614-4abf-8664-3f7c8be6b16c-kube-api-access-7xm8j\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.337262 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-catalog-content\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.337498 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-utilities\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.365688 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7xm8j\" (UniqueName: \"kubernetes.io/projected/39f8e461-2614-4abf-8664-3f7c8be6b16c-kube-api-access-7xm8j\") pod \"redhat-marketplace-rvxgg\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.435917 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:49:51 crc kubenswrapper[4856]: I1203 09:49:51.976910 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvxgg"] Dec 03 09:49:51 crc kubenswrapper[4856]: W1203 09:49:51.978419 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39f8e461_2614_4abf_8664_3f7c8be6b16c.slice/crio-4a5dba395fddd66d52fb38f2898532ac02895d407bc56b556754d225f68d7d02 WatchSource:0}: Error finding container 4a5dba395fddd66d52fb38f2898532ac02895d407bc56b556754d225f68d7d02: Status 404 returned error can't find the container with id 4a5dba395fddd66d52fb38f2898532ac02895d407bc56b556754d225f68d7d02 Dec 03 09:49:52 crc kubenswrapper[4856]: I1203 09:49:52.108762 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvxgg" event={"ID":"39f8e461-2614-4abf-8664-3f7c8be6b16c","Type":"ContainerStarted","Data":"4a5dba395fddd66d52fb38f2898532ac02895d407bc56b556754d225f68d7d02"} Dec 03 09:49:53 crc kubenswrapper[4856]: I1203 09:49:53.129858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerStarted","Data":"61585aa7d6d03494488aefead83053fc451e7d88c395626c285141cf67c9d2cb"} Dec 03 09:49:53 crc kubenswrapper[4856]: I1203 09:49:53.134389 4856 generic.go:334] "Generic (PLEG): container finished" podID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerID="6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830" exitCode=0 Dec 03 09:49:53 crc kubenswrapper[4856]: I1203 09:49:53.134441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvxgg" event={"ID":"39f8e461-2614-4abf-8664-3f7c8be6b16c","Type":"ContainerDied","Data":"6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830"} Dec 03 09:49:53 crc kubenswrapper[4856]: I1203 09:49:53.165386 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n8ztr" podStartSLOduration=3.634002338 podStartE2EDuration="6.16535978s" podCreationTimestamp="2025-12-03 09:49:47 +0000 UTC" firstStartedPulling="2025-12-03 09:49:49.074221623 +0000 UTC m=+2257.257113924" lastFinishedPulling="2025-12-03 09:49:51.605579055 +0000 UTC m=+2259.788471366" observedRunningTime="2025-12-03 09:49:53.155967202 +0000 UTC m=+2261.338859503" watchObservedRunningTime="2025-12-03 09:49:53.16535978 +0000 UTC m=+2261.348252081" Dec 03 09:49:55 crc kubenswrapper[4856]: I1203 09:49:55.156444 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvxgg" event={"ID":"39f8e461-2614-4abf-8664-3f7c8be6b16c","Type":"ContainerDied","Data":"3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5"} Dec 03 09:49:55 crc kubenswrapper[4856]: I1203 09:49:55.156522 4856 generic.go:334] "Generic (PLEG): container finished" podID="39f8e461-2614-4abf-8664-3f7c8be6b16c" 
containerID="3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5" exitCode=0 Dec 03 09:49:56 crc kubenswrapper[4856]: I1203 09:49:56.173835 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvxgg" event={"ID":"39f8e461-2614-4abf-8664-3f7c8be6b16c","Type":"ContainerStarted","Data":"a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9"} Dec 03 09:49:56 crc kubenswrapper[4856]: I1203 09:49:56.210488 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvxgg" podStartSLOduration=2.797648917 podStartE2EDuration="5.210450707s" podCreationTimestamp="2025-12-03 09:49:51 +0000 UTC" firstStartedPulling="2025-12-03 09:49:53.139331029 +0000 UTC m=+2261.322223330" lastFinishedPulling="2025-12-03 09:49:55.552132819 +0000 UTC m=+2263.735025120" observedRunningTime="2025-12-03 09:49:56.195101517 +0000 UTC m=+2264.377993848" watchObservedRunningTime="2025-12-03 09:49:56.210450707 +0000 UTC m=+2264.393343008" Dec 03 09:49:58 crc kubenswrapper[4856]: I1203 09:49:58.050330 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:58 crc kubenswrapper[4856]: I1203 09:49:58.051045 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:58 crc kubenswrapper[4856]: I1203 09:49:58.111176 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:58 crc kubenswrapper[4856]: I1203 09:49:58.348899 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:49:59 crc kubenswrapper[4856]: I1203 09:49:59.698941 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8ztr"] Dec 03 09:50:00 crc kubenswrapper[4856]: I1203 09:50:00.315009 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n8ztr" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="registry-server" containerID="cri-o://61585aa7d6d03494488aefead83053fc451e7d88c395626c285141cf67c9d2cb" gracePeriod=2 Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.332855 4856 generic.go:334] "Generic (PLEG): container finished" podID="668827f0-78cb-45df-aee0-746fb49416e2" containerID="61585aa7d6d03494488aefead83053fc451e7d88c395626c285141cf67c9d2cb" exitCode=0 Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.332932 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerDied","Data":"61585aa7d6d03494488aefead83053fc451e7d88c395626c285141cf67c9d2cb"} Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.436312 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.436397 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.449835 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.501396 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.621045 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-utilities\") pod \"668827f0-78cb-45df-aee0-746fb49416e2\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.621205 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfhmg\" (UniqueName: \"kubernetes.io/projected/668827f0-78cb-45df-aee0-746fb49416e2-kube-api-access-kfhmg\") pod \"668827f0-78cb-45df-aee0-746fb49416e2\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.621510 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-catalog-content\") pod \"668827f0-78cb-45df-aee0-746fb49416e2\" (UID: \"668827f0-78cb-45df-aee0-746fb49416e2\") " Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.622219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-utilities" (OuterVolumeSpecName: "utilities") pod "668827f0-78cb-45df-aee0-746fb49416e2" (UID: "668827f0-78cb-45df-aee0-746fb49416e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.639783 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668827f0-78cb-45df-aee0-746fb49416e2-kube-api-access-kfhmg" (OuterVolumeSpecName: "kube-api-access-kfhmg") pod "668827f0-78cb-45df-aee0-746fb49416e2" (UID: "668827f0-78cb-45df-aee0-746fb49416e2"). InnerVolumeSpecName "kube-api-access-kfhmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.673888 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "668827f0-78cb-45df-aee0-746fb49416e2" (UID: "668827f0-78cb-45df-aee0-746fb49416e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.725432 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.725474 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfhmg\" (UniqueName: \"kubernetes.io/projected/668827f0-78cb-45df-aee0-746fb49416e2-kube-api-access-kfhmg\") on node \"crc\" DevicePath \"\"" Dec 03 09:50:01 crc kubenswrapper[4856]: I1203 09:50:01.725490 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668827f0-78cb-45df-aee0-746fb49416e2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.346832 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8ztr" event={"ID":"668827f0-78cb-45df-aee0-746fb49416e2","Type":"ContainerDied","Data":"cea19ce8a67a3c0fd8da2a229fb610f47c6fb89a691ab008d3a1827213c619e9"} Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.346913 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8ztr" Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.347343 4856 scope.go:117] "RemoveContainer" containerID="61585aa7d6d03494488aefead83053fc451e7d88c395626c285141cf67c9d2cb" Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.387750 4856 scope.go:117] "RemoveContainer" containerID="c7cc36ac7b51fd72b2ab50f008e731f6955d49342dd2c9b227b92697a5d1e5a6" Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.394754 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8ztr"] Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.409741 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.412441 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n8ztr"] Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.426696 4856 scope.go:117] "RemoveContainer" containerID="ff37ab353808105c9c53e93a70c453e7a40a3f39ac9f1f13600b4e9a3e3201b7" Dec 03 09:50:02 crc kubenswrapper[4856]: I1203 09:50:02.700358 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668827f0-78cb-45df-aee0-746fb49416e2" path="/var/lib/kubelet/pods/668827f0-78cb-45df-aee0-746fb49416e2/volumes" Dec 03 09:50:03 crc kubenswrapper[4856]: I1203 09:50:03.899879 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvxgg"] Dec 03 09:50:04 crc kubenswrapper[4856]: I1203 09:50:04.377210 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvxgg" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="registry-server" containerID="cri-o://a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9" gracePeriod=2 Dec 03 09:50:04 crc kubenswrapper[4856]: I1203 09:50:04.921700 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.115972 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-catalog-content\") pod \"39f8e461-2614-4abf-8664-3f7c8be6b16c\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.116336 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xm8j\" (UniqueName: \"kubernetes.io/projected/39f8e461-2614-4abf-8664-3f7c8be6b16c-kube-api-access-7xm8j\") pod \"39f8e461-2614-4abf-8664-3f7c8be6b16c\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.116415 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-utilities\") pod \"39f8e461-2614-4abf-8664-3f7c8be6b16c\" (UID: \"39f8e461-2614-4abf-8664-3f7c8be6b16c\") " Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.121129 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-utilities" (OuterVolumeSpecName: "utilities") pod "39f8e461-2614-4abf-8664-3f7c8be6b16c" (UID: "39f8e461-2614-4abf-8664-3f7c8be6b16c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.134258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f8e461-2614-4abf-8664-3f7c8be6b16c-kube-api-access-7xm8j" (OuterVolumeSpecName: "kube-api-access-7xm8j") pod "39f8e461-2614-4abf-8664-3f7c8be6b16c" (UID: "39f8e461-2614-4abf-8664-3f7c8be6b16c"). InnerVolumeSpecName "kube-api-access-7xm8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.137740 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39f8e461-2614-4abf-8664-3f7c8be6b16c" (UID: "39f8e461-2614-4abf-8664-3f7c8be6b16c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.219632 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.219679 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xm8j\" (UniqueName: \"kubernetes.io/projected/39f8e461-2614-4abf-8664-3f7c8be6b16c-kube-api-access-7xm8j\") on node \"crc\" DevicePath \"\"" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.219695 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f8e461-2614-4abf-8664-3f7c8be6b16c-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.390831 4856 generic.go:334] "Generic (PLEG): container finished" podID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerID="a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9" exitCode=0 Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.390887 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvxgg" event={"ID":"39f8e461-2614-4abf-8664-3f7c8be6b16c","Type":"ContainerDied","Data":"a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9"} Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.390937 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvxgg" event={"ID":"39f8e461-2614-4abf-8664-3f7c8be6b16c","Type":"ContainerDied","Data":"4a5dba395fddd66d52fb38f2898532ac02895d407bc56b556754d225f68d7d02"} Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.390934 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvxgg" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.390961 4856 scope.go:117] "RemoveContainer" containerID="a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.416852 4856 scope.go:117] "RemoveContainer" containerID="3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.440589 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvxgg"] Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.449239 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvxgg"] Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.457799 4856 scope.go:117] "RemoveContainer" containerID="6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.512463 4856 scope.go:117] "RemoveContainer" containerID="a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9" Dec 03 09:50:05 crc kubenswrapper[4856]: E1203 09:50:05.513325 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9\": container with ID starting with a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9 not found: ID does not exist" containerID="a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.513363 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9"} err="failed to get container status \"a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9\": rpc error: code = NotFound desc = could not find container \"a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9\": container with ID starting with a366ad21336cad561fbae2d9744f4342213ae98815f139eb469b9d633609c3f9 not found: ID does not exist" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.513389 4856 scope.go:117] "RemoveContainer" containerID="3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5" Dec 03 09:50:05 crc kubenswrapper[4856]: E1203 09:50:05.514062 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5\": container with ID starting with 3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5 not found: ID does not exist" containerID="3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.514194 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5"} err="failed to get container status \"3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5\": rpc error: code = NotFound desc = could not find container \"3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5\": container with ID starting with 3703abb88ae2b94e006ac7a66227e50ec16b77ca348edd7bd5d802624d7acff5 not found: ID does not exist" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.514248 4856 scope.go:117] "RemoveContainer" 
containerID="6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830" Dec 03 09:50:05 crc kubenswrapper[4856]: E1203 09:50:05.514801 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830\": container with ID starting with 6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830 not found: ID does not exist" containerID="6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830" Dec 03 09:50:05 crc kubenswrapper[4856]: I1203 09:50:05.514928 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830"} err="failed to get container status \"6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830\": rpc error: code = NotFound desc = could not find container \"6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830\": container with ID starting with 6f7367bf5def8ab952a46dae3104d7638bbd40c03e3d3e0cf5c54e0b141b3830 not found: ID does not exist" Dec 03 09:50:06 crc kubenswrapper[4856]: I1203 09:50:06.706556 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" path="/var/lib/kubelet/pods/39f8e461-2614-4abf-8664-3f7c8be6b16c/volumes" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.388980 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xlfp"] Dec 03 09:50:54 crc kubenswrapper[4856]: E1203 09:50:54.390286 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="registry-server" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390314 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="registry-server" Dec 03 09:50:54 crc kubenswrapper[4856]: E1203 09:50:54.390336 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="extract-utilities" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390343 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="extract-utilities" Dec 03 09:50:54 crc kubenswrapper[4856]: E1203 09:50:54.390366 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="extract-content" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390372 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="extract-content" Dec 03 09:50:54 crc kubenswrapper[4856]: E1203 09:50:54.390391 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="extract-utilities" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390400 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="extract-utilities" Dec 03 09:50:54 crc kubenswrapper[4856]: E1203 09:50:54.390415 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="extract-content" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390420 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="extract-content" Dec 
03 09:50:54 crc kubenswrapper[4856]: E1203 09:50:54.390432 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="registry-server" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390438 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="registry-server" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390668 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f8e461-2614-4abf-8664-3f7c8be6b16c" containerName="registry-server" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.390688 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="668827f0-78cb-45df-aee0-746fb49416e2" containerName="registry-server" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.392696 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.404770 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xlfp"] Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.506516 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867da068-dbc1-4ec2-a12d-f443846bebd8-catalog-content\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.507359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9kc\" (UniqueName: \"kubernetes.io/projected/867da068-dbc1-4ec2-a12d-f443846bebd8-kube-api-access-mb9kc\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.507934 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867da068-dbc1-4ec2-a12d-f443846bebd8-utilities\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.610690 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb9kc\" (UniqueName: \"kubernetes.io/projected/867da068-dbc1-4ec2-a12d-f443846bebd8-kube-api-access-mb9kc\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.611020 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867da068-dbc1-4ec2-a12d-f443846bebd8-utilities\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.611170 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867da068-dbc1-4ec2-a12d-f443846bebd8-catalog-content\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " 
pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.611613 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867da068-dbc1-4ec2-a12d-f443846bebd8-utilities\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.611820 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867da068-dbc1-4ec2-a12d-f443846bebd8-catalog-content\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.640691 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9kc\" (UniqueName: \"kubernetes.io/projected/867da068-dbc1-4ec2-a12d-f443846bebd8-kube-api-access-mb9kc\") pod \"certified-operators-8xlfp\" (UID: \"867da068-dbc1-4ec2-a12d-f443846bebd8\") " pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:54 crc kubenswrapper[4856]: I1203 09:50:54.730594 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:50:55 crc kubenswrapper[4856]: I1203 09:50:55.365782 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xlfp"] Dec 03 09:50:56 crc kubenswrapper[4856]: I1203 09:50:56.036443 4856 generic.go:334] "Generic (PLEG): container finished" podID="867da068-dbc1-4ec2-a12d-f443846bebd8" containerID="837211bb374f00addf404597881bb3d2fa8be14d3f7d5ade8edbf9e0f68c026b" exitCode=0 Dec 03 09:50:56 crc kubenswrapper[4856]: I1203 09:50:56.036567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xlfp" event={"ID":"867da068-dbc1-4ec2-a12d-f443846bebd8","Type":"ContainerDied","Data":"837211bb374f00addf404597881bb3d2fa8be14d3f7d5ade8edbf9e0f68c026b"} Dec 03 09:50:56 crc kubenswrapper[4856]: I1203 09:50:56.037098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xlfp" event={"ID":"867da068-dbc1-4ec2-a12d-f443846bebd8","Type":"ContainerStarted","Data":"804bcbab578b28fa56417b49d78ddca62e918b52f6a90dfd3aa774b0f9f55858"} Dec 03 09:51:02 crc kubenswrapper[4856]: I1203 09:51:02.133277 4856 generic.go:334] "Generic (PLEG): container finished" podID="867da068-dbc1-4ec2-a12d-f443846bebd8" containerID="3386966f2a95274eec4173c954769a4e714d2ffefc5d2c84fa61530322df5fe7" exitCode=0 Dec 03 09:51:02 crc kubenswrapper[4856]: I1203 09:51:02.133430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xlfp" event={"ID":"867da068-dbc1-4ec2-a12d-f443846bebd8","Type":"ContainerDied","Data":"3386966f2a95274eec4173c954769a4e714d2ffefc5d2c84fa61530322df5fe7"} Dec 03 09:51:03 crc kubenswrapper[4856]: I1203 09:51:03.152342 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xlfp" event={"ID":"867da068-dbc1-4ec2-a12d-f443846bebd8","Type":"ContainerStarted","Data":"7c30e79e127987b725af618d4e5c632cf49158c2a8eabc441f0fb66fdd5aa5a1"} Dec 03 09:51:03 crc kubenswrapper[4856]: I1203 09:51:03.175520 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-8xlfp" podStartSLOduration=2.712445473 podStartE2EDuration="9.175497759s" podCreationTimestamp="2025-12-03 09:50:54 +0000 UTC" firstStartedPulling="2025-12-03 09:50:56.038386514 +0000 UTC m=+2324.221278815" lastFinishedPulling="2025-12-03 09:51:02.50143879 +0000 UTC m=+2330.684331101" observedRunningTime="2025-12-03 09:51:03.174365991 +0000 UTC m=+2331.357258282" watchObservedRunningTime="2025-12-03 09:51:03.175497759 +0000 UTC m=+2331.358390060" Dec 03 09:51:04 crc kubenswrapper[4856]: I1203 09:51:04.731009 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:51:04 crc kubenswrapper[4856]: I1203 09:51:04.731484 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:51:04 crc kubenswrapper[4856]: I1203 09:51:04.787338 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:51:14 crc kubenswrapper[4856]: I1203 09:51:14.787942 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xlfp" Dec 03 09:51:14 crc kubenswrapper[4856]: I1203 09:51:14.869947 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xlfp"] Dec 03 09:51:14 crc kubenswrapper[4856]: I1203 09:51:14.919205 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdmg"] Dec 03 09:51:14 crc kubenswrapper[4856]: I1203 09:51:14.919540 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hdmg" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="registry-server" containerID="cri-o://cd5ca6b7499a8d90a15acb54c7c443caf39e216ffe20d2e8d42645c87e7ae3b9" gracePeriod=2 Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.306455 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c211f61-54da-4cdd-b183-dcef0330433c" containerID="cd5ca6b7499a8d90a15acb54c7c443caf39e216ffe20d2e8d42645c87e7ae3b9" exitCode=0 Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.307134 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerDied","Data":"cd5ca6b7499a8d90a15acb54c7c443caf39e216ffe20d2e8d42645c87e7ae3b9"} Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.467847 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.605336 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phfkn\" (UniqueName: \"kubernetes.io/projected/9c211f61-54da-4cdd-b183-dcef0330433c-kube-api-access-phfkn\") pod \"9c211f61-54da-4cdd-b183-dcef0330433c\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.605448 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-catalog-content\") pod \"9c211f61-54da-4cdd-b183-dcef0330433c\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.605508 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-utilities\") pod \"9c211f61-54da-4cdd-b183-dcef0330433c\" (UID: \"9c211f61-54da-4cdd-b183-dcef0330433c\") " Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.606627 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-utilities" (OuterVolumeSpecName: "utilities") pod "9c211f61-54da-4cdd-b183-dcef0330433c" (UID: "9c211f61-54da-4cdd-b183-dcef0330433c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.613251 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c211f61-54da-4cdd-b183-dcef0330433c-kube-api-access-phfkn" (OuterVolumeSpecName: "kube-api-access-phfkn") pod "9c211f61-54da-4cdd-b183-dcef0330433c" (UID: "9c211f61-54da-4cdd-b183-dcef0330433c"). InnerVolumeSpecName "kube-api-access-phfkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.654647 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c211f61-54da-4cdd-b183-dcef0330433c" (UID: "9c211f61-54da-4cdd-b183-dcef0330433c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.708646 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phfkn\" (UniqueName: \"kubernetes.io/projected/9c211f61-54da-4cdd-b183-dcef0330433c-kube-api-access-phfkn\") on node \"crc\" DevicePath \"\"" Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.708704 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:51:15 crc kubenswrapper[4856]: I1203 09:51:15.708722 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c211f61-54da-4cdd-b183-dcef0330433c-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.323060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdmg" event={"ID":"9c211f61-54da-4cdd-b183-dcef0330433c","Type":"ContainerDied","Data":"5f63918a4c78de7c5c36040544104b4d101216965a28c9edd41f07c34404ca29"} Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.323139 4856 scope.go:117] "RemoveContainer" containerID="cd5ca6b7499a8d90a15acb54c7c443caf39e216ffe20d2e8d42645c87e7ae3b9" Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.323151 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hdmg" Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.359923 4856 scope.go:117] "RemoveContainer" containerID="7cd2e50b446c01a0ab9a8b39b8d4cbc9132ad8f7f1ad9674559c279a9f47dcf1" Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.368494 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdmg"] Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.380846 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hdmg"] Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.397896 4856 scope.go:117] "RemoveContainer" containerID="9140198d35ca16081a53136b2d5e067824a2a5591510f2f2f8a55a5d3d2d8a0c" Dec 03 09:51:16 crc kubenswrapper[4856]: I1203 09:51:16.701284 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" path="/var/lib/kubelet/pods/9c211f61-54da-4cdd-b183-dcef0330433c/volumes" Dec 03 09:51:52 crc kubenswrapper[4856]: I1203 09:51:52.759542 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:51:52 crc kubenswrapper[4856]: I1203 09:51:52.760186 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:52:22 crc kubenswrapper[4856]: I1203 09:52:22.759156 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:52:22 crc kubenswrapper[4856]: I1203 09:52:22.760174 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:52:24 crc kubenswrapper[4856]: I1203 09:52:24.157555 4856 generic.go:334] "Generic (PLEG): container finished" podID="484332af-13c0-4270-932a-181a6b3f879c" containerID="cef30378aceb1b63eadbfebdc9cf37f8917d1e208b32b825a6a92f7c0b734fa0" exitCode=0 Dec 03 09:52:24 crc kubenswrapper[4856]: I1203 09:52:24.157665 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" event={"ID":"484332af-13c0-4270-932a-181a6b3f879c","Type":"ContainerDied","Data":"cef30378aceb1b63eadbfebdc9cf37f8917d1e208b32b825a6a92f7c0b734fa0"} Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.623358 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.724581 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-secret-0\") pod \"484332af-13c0-4270-932a-181a6b3f879c\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.724935 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-combined-ca-bundle\") pod \"484332af-13c0-4270-932a-181a6b3f879c\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.725174 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blpk2\" (UniqueName: \"kubernetes.io/projected/484332af-13c0-4270-932a-181a6b3f879c-kube-api-access-blpk2\") pod \"484332af-13c0-4270-932a-181a6b3f879c\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.725237 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-inventory\") pod \"484332af-13c0-4270-932a-181a6b3f879c\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.725396 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-ssh-key\") pod \"484332af-13c0-4270-932a-181a6b3f879c\" (UID: \"484332af-13c0-4270-932a-181a6b3f879c\") " Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.733347 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "484332af-13c0-4270-932a-181a6b3f879c" (UID: "484332af-13c0-4270-932a-181a6b3f879c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.734268 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484332af-13c0-4270-932a-181a6b3f879c-kube-api-access-blpk2" (OuterVolumeSpecName: "kube-api-access-blpk2") pod "484332af-13c0-4270-932a-181a6b3f879c" (UID: "484332af-13c0-4270-932a-181a6b3f879c"). InnerVolumeSpecName "kube-api-access-blpk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.759393 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-inventory" (OuterVolumeSpecName: "inventory") pod "484332af-13c0-4270-932a-181a6b3f879c" (UID: "484332af-13c0-4270-932a-181a6b3f879c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.762481 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "484332af-13c0-4270-932a-181a6b3f879c" (UID: "484332af-13c0-4270-932a-181a6b3f879c"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.762646 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "484332af-13c0-4270-932a-181a6b3f879c" (UID: "484332af-13c0-4270-932a-181a6b3f879c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.828217 4856 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.828262 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blpk2\" (UniqueName: \"kubernetes.io/projected/484332af-13c0-4270-932a-181a6b3f879c-kube-api-access-blpk2\") on node \"crc\" DevicePath \"\"" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.828275 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.828285 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:52:25 crc kubenswrapper[4856]: I1203 09:52:25.828295 4856 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/484332af-13c0-4270-932a-181a6b3f879c-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.178891 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" event={"ID":"484332af-13c0-4270-932a-181a6b3f879c","Type":"ContainerDied","Data":"b46663619ae4a50335d24a0723e5e0e027679f5dba761b8780803b0530728756"} Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.178937 4856 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46663619ae4a50335d24a0723e5e0e027679f5dba761b8780803b0530728756" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.179000 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.293492 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f"] Dec 03 09:52:26 crc kubenswrapper[4856]: E1203 09:52:26.294052 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="extract-content" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.294075 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="extract-content" Dec 03 09:52:26 crc kubenswrapper[4856]: E1203 09:52:26.294098 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484332af-13c0-4270-932a-181a6b3f879c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.294106 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="484332af-13c0-4270-932a-181a6b3f879c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 09:52:26 crc kubenswrapper[4856]: E1203 09:52:26.294137 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="extract-utilities" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.294144 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="extract-utilities" Dec 03 09:52:26 crc kubenswrapper[4856]: E1203 09:52:26.294157 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="registry-server" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.294165 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="registry-server" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.294379 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c211f61-54da-4cdd-b183-dcef0330433c" containerName="registry-server" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.294405 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="484332af-13c0-4270-932a-181a6b3f879c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.295104 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.298915 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.299378 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.299389 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.299438 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.299546 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.304683 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.313821 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.316714 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f"] Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.445939 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446021 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446102 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446162 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4nj\" (UniqueName: 
\"kubernetes.io/projected/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-kube-api-access-gg4nj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446190 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446297 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.446322 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548361 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548411 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548446 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548499 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548524 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4nj\" (UniqueName: \"kubernetes.io/projected/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-kube-api-access-gg4nj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548610 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.548638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.550617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.555621 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.555860 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.555869 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.556277 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.556310 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.556576 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.558373 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.575339 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4nj\" (UniqueName: \"kubernetes.io/projected/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-kube-api-access-gg4nj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w9n5f\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:26 crc kubenswrapper[4856]: I1203 09:52:26.620748 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:52:27 crc kubenswrapper[4856]: I1203 09:52:27.222442 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f"] Dec 03 09:52:28 crc kubenswrapper[4856]: I1203 09:52:28.200773 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" event={"ID":"ebee317e-98e4-499f-91e9-fefdaa0dd0e3","Type":"ContainerStarted","Data":"aefb54a3db115af945a594c8b0e4f6e9b13da9d31360752256d74fe0a4bf1eab"} Dec 03 09:52:28 crc kubenswrapper[4856]: I1203 09:52:28.201902 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" event={"ID":"ebee317e-98e4-499f-91e9-fefdaa0dd0e3","Type":"ContainerStarted","Data":"510bc3e56263445f85c2fbd1a9a1daceaccc19154b599dfbf5dce1c92edaebae"} Dec 03 09:52:28 crc kubenswrapper[4856]: I1203 09:52:28.237326 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" podStartSLOduration=1.768207609 podStartE2EDuration="2.237279702s" podCreationTimestamp="2025-12-03 09:52:26 +0000 UTC" firstStartedPulling="2025-12-03 09:52:27.22627822 +0000 UTC m=+2415.409170521" lastFinishedPulling="2025-12-03 09:52:27.695350323 +0000 UTC m=+2415.878242614" observedRunningTime="2025-12-03 09:52:28.225941046 +0000 UTC m=+2416.408833357" watchObservedRunningTime="2025-12-03 09:52:28.237279702 +0000 UTC m=+2416.420172003" Dec 03 09:52:52 crc kubenswrapper[4856]: I1203 09:52:52.759163 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 09:52:52 crc kubenswrapper[4856]: I1203 09:52:52.759855 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 09:52:52 crc kubenswrapper[4856]: I1203 09:52:52.759930 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 09:52:52 crc kubenswrapper[4856]: I1203 09:52:52.761251 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 09:52:52 crc kubenswrapper[4856]: I1203 09:52:52.761324 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" gracePeriod=600 Dec 03 09:52:52 crc kubenswrapper[4856]: E1203 09:52:52.905151 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:52:53 crc kubenswrapper[4856]: I1203 09:52:53.525157 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" exitCode=0 Dec 03 09:52:53 crc kubenswrapper[4856]: I1203 09:52:53.525629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"} Dec 03 09:52:53 crc kubenswrapper[4856]: I1203 09:52:53.525685 4856 scope.go:117] "RemoveContainer" containerID="b42a1cc69efc601feed340cd22d1f8f16720df8a4255b6c2d037f29e60c7f9aa" Dec 03 09:52:53 crc kubenswrapper[4856]: I1203 09:52:53.526614 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:52:53 crc kubenswrapper[4856]: E1203 09:52:53.526897 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:53:08 crc kubenswrapper[4856]: I1203 09:53:08.689325 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:53:08 crc kubenswrapper[4856]: E1203 09:53:08.690393 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:53:21 crc kubenswrapper[4856]: I1203 09:53:21.690732 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:53:21 crc kubenswrapper[4856]: E1203 09:53:21.692231 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:53:34 crc kubenswrapper[4856]: I1203 09:53:34.688776 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:53:34 crc kubenswrapper[4856]: E1203 09:53:34.689777 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:53:46 crc kubenswrapper[4856]: I1203 09:53:46.689269 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:53:46 crc kubenswrapper[4856]: E1203 09:53:46.690449 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:53:57 crc kubenswrapper[4856]: I1203 09:53:57.689452 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:53:57 crc kubenswrapper[4856]: E1203 09:53:57.690434 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:54:09 crc kubenswrapper[4856]: I1203 09:54:09.690271 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:54:09 crc kubenswrapper[4856]: E1203 09:54:09.691335 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:54:20 crc kubenswrapper[4856]: I1203 09:54:20.689239 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:54:20 crc kubenswrapper[4856]: E1203 09:54:20.690248 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:54:31 crc kubenswrapper[4856]: I1203 09:54:31.689097 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:54:31 crc kubenswrapper[4856]: E1203 09:54:31.691380 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" 
podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:54:43 crc kubenswrapper[4856]: I1203 09:54:43.689525 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:54:43 crc kubenswrapper[4856]: E1203 09:54:43.690471 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:54:55 crc kubenswrapper[4856]: I1203 09:54:55.689616 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:54:55 crc kubenswrapper[4856]: E1203 09:54:55.690483 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:54:56 crc kubenswrapper[4856]: I1203 09:54:56.848139 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-st7xj"] Dec 03 09:54:56 crc kubenswrapper[4856]: I1203 09:54:56.850439 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:56 crc kubenswrapper[4856]: I1203 09:54:56.871099 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st7xj"] Dec 03 09:54:56 crc kubenswrapper[4856]: I1203 09:54:56.961531 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbc9\" (UniqueName: \"kubernetes.io/projected/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-kube-api-access-7fbc9\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:56 crc kubenswrapper[4856]: I1203 09:54:56.961692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-catalog-content\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:56 crc kubenswrapper[4856]: I1203 09:54:56.962006 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-utilities\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.064666 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-catalog-content\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 
crc kubenswrapper[4856]: I1203 09:54:57.065090 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-utilities\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.065260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbc9\" (UniqueName: \"kubernetes.io/projected/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-kube-api-access-7fbc9\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.065417 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-catalog-content\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.065645 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-utilities\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.092366 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbc9\" (UniqueName: \"kubernetes.io/projected/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-kube-api-access-7fbc9\") pod \"redhat-operators-st7xj\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.178992 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.716516 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st7xj"] Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.967368 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerStarted","Data":"a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412"} Dec 03 09:54:57 crc kubenswrapper[4856]: I1203 09:54:57.968020 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerStarted","Data":"d7100174deaa5c1b373ad0eb7a8f880ca519a16a24987d3744ef9474848f1e4f"} Dec 03 09:54:58 crc kubenswrapper[4856]: I1203 09:54:58.979737 4856 generic.go:334] "Generic (PLEG): container finished" podID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerID="a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412" exitCode=0 Dec 03 09:54:58 crc kubenswrapper[4856]: I1203 09:54:58.979967 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerDied","Data":"a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412"} Dec 03 09:54:58 crc kubenswrapper[4856]: I1203 09:54:58.984502 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 09:54:59 crc kubenswrapper[4856]: I1203 09:54:59.993100 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerStarted","Data":"cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb"} Dec 03 09:55:01 crc kubenswrapper[4856]: I1203 09:55:01.011054 4856 generic.go:334] "Generic (PLEG): container finished" podID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerID="cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb" exitCode=0 Dec 03 09:55:01 crc kubenswrapper[4856]: I1203 09:55:01.011144 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerDied","Data":"cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb"} Dec 03 09:55:02 crc kubenswrapper[4856]: I1203 09:55:02.023772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerStarted","Data":"6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31"} Dec 03 09:55:02 crc kubenswrapper[4856]: I1203 09:55:02.047483 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-st7xj" podStartSLOduration=3.429598015 podStartE2EDuration="6.047460726s" podCreationTimestamp="2025-12-03 09:54:56 +0000 UTC" firstStartedPulling="2025-12-03 09:54:58.984258381 +0000 UTC m=+2567.167150682" lastFinishedPulling="2025-12-03 09:55:01.602121092 +0000 UTC m=+2569.785013393" observedRunningTime="2025-12-03 09:55:02.041551697 +0000 UTC m=+2570.224443998" watchObservedRunningTime="2025-12-03 09:55:02.047460726 +0000 UTC m=+2570.230353027" Dec 03 09:55:07 crc kubenswrapper[4856]: I1203 09:55:07.179764 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:55:07 crc kubenswrapper[4856]: I1203 09:55:07.180831 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:55:07 crc kubenswrapper[4856]: I1203 09:55:07.235618 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:55:08 crc kubenswrapper[4856]: I1203 09:55:08.148593 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:55:08 crc kubenswrapper[4856]: I1203 09:55:08.238274 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st7xj"] Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.106164 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-st7xj" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="registry-server" containerID="cri-o://6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31" gracePeriod=2 Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.619682 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.689088 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:55:10 crc kubenswrapper[4856]: E1203 09:55:10.689470 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.808928 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbc9\" (UniqueName: \"kubernetes.io/projected/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-kube-api-access-7fbc9\") pod \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.809310 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-catalog-content\") pod \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.809401 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-utilities\") pod \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\" (UID: \"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe\") " Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.810451 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-utilities" (OuterVolumeSpecName: "utilities") pod "3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" (UID: "3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.817982 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-kube-api-access-7fbc9" (OuterVolumeSpecName: "kube-api-access-7fbc9") pod "3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" (UID: "3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe"). InnerVolumeSpecName "kube-api-access-7fbc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.911895 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbc9\" (UniqueName: \"kubernetes.io/projected/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-kube-api-access-7fbc9\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.911933 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:10 crc kubenswrapper[4856]: I1203 09:55:10.934223 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" (UID: "3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.014002 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.119595 4856 generic.go:334] "Generic (PLEG): container finished" podID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerID="6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31" exitCode=0 Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.119660 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-st7xj" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.119670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerDied","Data":"6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31"} Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.119753 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st7xj" event={"ID":"3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe","Type":"ContainerDied","Data":"d7100174deaa5c1b373ad0eb7a8f880ca519a16a24987d3744ef9474848f1e4f"} Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.119799 4856 scope.go:117] "RemoveContainer" containerID="6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.159489 4856 scope.go:117] "RemoveContainer" containerID="cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.160637 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st7xj"] Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.172251 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-st7xj"] Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.196236 4856 scope.go:117] "RemoveContainer" containerID="a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.234253 4856 scope.go:117] "RemoveContainer" containerID="6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31" Dec 03 09:55:11 crc kubenswrapper[4856]: E1203 09:55:11.235373 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31\": container with ID starting with 6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31 not found: ID does not exist" containerID="6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.235449 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31"} err="failed to get container status \"6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31\": rpc error: code = NotFound desc = could not find container \"6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31\": container with ID starting with 6488b119bbdf8bab8ac8793ec826deb03c40ae15349ad6e21db18e67d2685d31 not found: ID does not exist" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.235492 4856 scope.go:117] "RemoveContainer" containerID="cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb" Dec 03 09:55:11 crc kubenswrapper[4856]: E1203 09:55:11.236935 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb\": container with ID starting with cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb not found: ID does not exist" containerID="cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.237023 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb"} err="failed to get container status \"cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb\": rpc error: code = NotFound desc = could not find container \"cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb\": container with ID starting with cce74620c2b6fb94d32c313064ce41b7a707d64e012883699f344d95433422eb not found: ID does not exist" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.237066 4856 scope.go:117] "RemoveContainer" containerID="a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412" Dec 03 09:55:11 crc kubenswrapper[4856]: E1203 09:55:11.237560 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412\": container with ID starting with a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412 not found: ID does not exist" containerID="a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412" Dec 03 09:55:11 crc kubenswrapper[4856]: I1203 09:55:11.237596 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412"} err="failed to get container status \"a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412\": rpc error: code = NotFound desc = could not find container \"a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412\": container with ID starting with a7434d5d5f28cdb41f6f60302a44a8adec9ed438f3bd8e409944506aab250412 not found: ID does not exist" Dec 03 09:55:12 crc kubenswrapper[4856]: I1203 09:55:12.718656 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" path="/var/lib/kubelet/pods/3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe/volumes" Dec 03 09:55:13 crc kubenswrapper[4856]: I1203 09:55:13.154119 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebee317e-98e4-499f-91e9-fefdaa0dd0e3" containerID="aefb54a3db115af945a594c8b0e4f6e9b13da9d31360752256d74fe0a4bf1eab" exitCode=0 Dec 03 09:55:13 crc kubenswrapper[4856]: I1203 09:55:13.154171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" event={"ID":"ebee317e-98e4-499f-91e9-fefdaa0dd0e3","Type":"ContainerDied","Data":"aefb54a3db115af945a594c8b0e4f6e9b13da9d31360752256d74fe0a4bf1eab"} Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.646209 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.797979 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4nj\" (UniqueName: \"kubernetes.io/projected/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-kube-api-access-gg4nj\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798122 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-combined-ca-bundle\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798176 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-0\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798253 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-ssh-key\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798435 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-1\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798515 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-1\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798598 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-0\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798627 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-extra-config-0\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.798733 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-inventory\") pod \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\" (UID: \"ebee317e-98e4-499f-91e9-fefdaa0dd0e3\") " Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.807032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-kube-api-access-gg4nj" (OuterVolumeSpecName: "kube-api-access-gg4nj") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "kube-api-access-gg4nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.807604 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.836514 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.840288 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.842361 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.844946 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.845471 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-inventory" (OuterVolumeSpecName: "inventory") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.852568 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.855468 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ebee317e-98e4-499f-91e9-fefdaa0dd0e3" (UID: "ebee317e-98e4-499f-91e9-fefdaa0dd0e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901136 4856 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901176 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901194 4856 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901210 4856 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901220 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901232 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4nj\" (UniqueName: \"kubernetes.io/projected/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-kube-api-access-gg4nj\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901240 4856 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901249 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:14 crc kubenswrapper[4856]: I1203 09:55:14.901257 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebee317e-98e4-499f-91e9-fefdaa0dd0e3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.175650 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" event={"ID":"ebee317e-98e4-499f-91e9-fefdaa0dd0e3","Type":"ContainerDied","Data":"510bc3e56263445f85c2fbd1a9a1daceaccc19154b599dfbf5dce1c92edaebae"} Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.175716 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="510bc3e56263445f85c2fbd1a9a1daceaccc19154b599dfbf5dce1c92edaebae" Dec 03 09:55:15 crc 
kubenswrapper[4856]: I1203 09:55:15.175826 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w9n5f" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.301185 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"] Dec 03 09:55:15 crc kubenswrapper[4856]: E1203 09:55:15.302484 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="extract-content" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.302518 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="extract-content" Dec 03 09:55:15 crc kubenswrapper[4856]: E1203 09:55:15.302747 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebee317e-98e4-499f-91e9-fefdaa0dd0e3" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.302758 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebee317e-98e4-499f-91e9-fefdaa0dd0e3" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 09:55:15 crc kubenswrapper[4856]: E1203 09:55:15.302778 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="extract-utilities" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.302790 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="extract-utilities" Dec 03 09:55:15 crc kubenswrapper[4856]: E1203 09:55:15.302838 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="registry-server" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.302848 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="registry-server" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.303117 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3faa91b8-b5af-43f0-ac1e-3c25fbaf9bbe" containerName="registry-server" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.303149 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebee317e-98e4-499f-91e9-fefdaa0dd0e3" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.305288 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.311508 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.311614 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.311655 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.312089 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.312422 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9lv\" (UniqueName: \"kubernetes.io/projected/f61fc35d-84b0-4d7c-8567-5457a1adfc58-kube-api-access-sb9lv\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.312659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.312744 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.312752 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" 
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.316261 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvktc"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.316409 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.316490 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.316680 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.339321 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"]
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.415365 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.415846 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb9lv\" (UniqueName: \"kubernetes.io/projected/f61fc35d-84b0-4d7c-8567-5457a1adfc58-kube-api-access-sb9lv\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.415971 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.416074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.416199 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.416301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.416392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.421765 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.421949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.422171 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.423064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.424109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.429861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.450257 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb9lv\" (UniqueName: \"kubernetes.io/projected/f61fc35d-84b0-4d7c-8567-5457a1adfc58-kube-api-access-sb9lv\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:15 crc kubenswrapper[4856]: I1203 09:55:15.628829 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"
Dec 03 09:55:16 crc kubenswrapper[4856]: I1203 09:55:16.220952 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm"]
Dec 03 09:55:17 crc kubenswrapper[4856]: I1203 09:55:17.199468 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" event={"ID":"f61fc35d-84b0-4d7c-8567-5457a1adfc58","Type":"ContainerStarted","Data":"afa5ab6cc0c20f1862499d40717193783591ce807b2637030429f1e8404dae7a"}
Dec 03 09:55:17 crc kubenswrapper[4856]: I1203 09:55:17.200035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" event={"ID":"f61fc35d-84b0-4d7c-8567-5457a1adfc58","Type":"ContainerStarted","Data":"282ba8962156f0dd0f3a542978aea4662ec16a638aa202ea1d95e10bb3ecd4b2"}
Dec 03 09:55:17 crc kubenswrapper[4856]: I1203 09:55:17.223699 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" podStartSLOduration=1.7541489239999999 podStartE2EDuration="2.223677197s" podCreationTimestamp="2025-12-03 09:55:15 +0000 UTC" firstStartedPulling="2025-12-03 09:55:16.226413533 +0000 UTC m=+2584.409305834" lastFinishedPulling="2025-12-03 09:55:16.695941806 +0000 UTC m=+2584.878834107" observedRunningTime="2025-12-03 09:55:17.215724737 +0000 UTC m=+2585.398617038" watchObservedRunningTime="2025-12-03 09:55:17.223677197 +0000 UTC m=+2585.406569498"
Dec 03 09:55:22 crc kubenswrapper[4856]: I1203 09:55:22.696053 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:55:22 crc kubenswrapper[4856]: E1203 09:55:22.696734 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:55:37 crc kubenswrapper[4856]: I1203 09:55:37.690441 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:55:37 crc kubenswrapper[4856]: E1203 09:55:37.691838 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:55:49 crc kubenswrapper[4856]: I1203 09:55:49.690109 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:55:49 crc kubenswrapper[4856]: E1203 09:55:49.690988 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:56:02 crc kubenswrapper[4856]: I1203 09:56:02.698460 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:56:02 crc kubenswrapper[4856]: E1203 09:56:02.699522 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:56:13 crc kubenswrapper[4856]: I1203 09:56:13.690067 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:56:13 crc kubenswrapper[4856]: E1203 09:56:13.690748 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:56:28 crc kubenswrapper[4856]: I1203 09:56:28.689641 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:56:28 crc kubenswrapper[4856]: E1203 09:56:28.690790 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:56:43 crc kubenswrapper[4856]: I1203 09:56:43.689949 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:56:43 crc kubenswrapper[4856]: E1203 09:56:43.691045 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:56:54 crc kubenswrapper[4856]: I1203 09:56:54.690563 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:56:54 crc kubenswrapper[4856]: E1203 09:56:54.691891 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:57:05 crc kubenswrapper[4856]: I1203 09:57:05.690889 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:57:05 crc kubenswrapper[4856]: E1203 09:57:05.692260 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:57:16 crc kubenswrapper[4856]: I1203 09:57:16.691137 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:57:16 crc kubenswrapper[4856]: E1203 09:57:16.692631 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:57:31 crc kubenswrapper[4856]: I1203 09:57:31.689213 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 09:57:31 crc kubenswrapper[4856]: E1203 09:57:31.690149 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 09:57:39 crc kubenswrapper[4856]: I1203 09:57:39.692846 4856 generic.go:334] "Generic (PLEG): container finished" podID="f61fc35d-84b0-4d7c-8567-5457a1adfc58" containerID="afa5ab6cc0c20f1862499d40717193783591ce807b2637030429f1e8404dae7a" exitCode=0
Dec 03 09:57:39 crc kubenswrapper[4856]: I1203 09:57:39.692937 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" event={"ID":"f61fc35d-84b0-4d7c-8567-5457a1adfc58","Type":"ContainerDied","Data":"afa5ab6cc0c20f1862499d40717193783591ce807b2637030429f1e8404dae7a"}
Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.158005 4856 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.339588 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-0\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.339661 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-inventory\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.339702 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ssh-key\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.339823 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-telemetry-combined-ca-bundle\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.339962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb9lv\" (UniqueName: \"kubernetes.io/projected/f61fc35d-84b0-4d7c-8567-5457a1adfc58-kube-api-access-sb9lv\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.340062 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-2\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.340106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-1\") pod \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\" (UID: \"f61fc35d-84b0-4d7c-8567-5457a1adfc58\") " Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.347326 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.349613 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61fc35d-84b0-4d7c-8567-5457a1adfc58-kube-api-access-sb9lv" (OuterVolumeSpecName: "kube-api-access-sb9lv") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). 
InnerVolumeSpecName "kube-api-access-sb9lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.375767 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.378627 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-inventory" (OuterVolumeSpecName: "inventory") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.378664 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.379590 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.390941 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f61fc35d-84b0-4d7c-8567-5457a1adfc58" (UID: "f61fc35d-84b0-4d7c-8567-5457a1adfc58"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449107 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb9lv\" (UniqueName: \"kubernetes.io/projected/f61fc35d-84b0-4d7c-8567-5457a1adfc58-kube-api-access-sb9lv\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449154 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449165 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449187 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449204 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449216 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.449228 4856 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61fc35d-84b0-4d7c-8567-5457a1adfc58-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.722663 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" event={"ID":"f61fc35d-84b0-4d7c-8567-5457a1adfc58","Type":"ContainerDied","Data":"282ba8962156f0dd0f3a542978aea4662ec16a638aa202ea1d95e10bb3ecd4b2"} Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.722761 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="282ba8962156f0dd0f3a542978aea4662ec16a638aa202ea1d95e10bb3ecd4b2" Dec 03 09:57:41 crc kubenswrapper[4856]: I1203 09:57:41.722911 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm" Dec 03 09:57:46 crc kubenswrapper[4856]: I1203 09:57:46.689889 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:57:46 crc kubenswrapper[4856]: E1203 09:57:46.691300 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 09:58:01 crc kubenswrapper[4856]: I1203 09:58:01.689896 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4" Dec 03 09:58:02 crc kubenswrapper[4856]: I1203 09:58:02.949011 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"71dba7a694c0a887a30c65f12fbc402c4df7114c113294479491eff356529a72"} Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.436877 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 09:58:35 crc kubenswrapper[4856]: E1203 09:58:35.438580 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61fc35d-84b0-4d7c-8567-5457a1adfc58" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.438604 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61fc35d-84b0-4d7c-8567-5457a1adfc58" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.438907 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61fc35d-84b0-4d7c-8567-5457a1adfc58" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.439995 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.445727 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.445946 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.446007 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-x2zqx" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.446124 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.458988 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548022 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548085 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548128 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548150 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548220 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrqn\" (UniqueName: \"kubernetes.io/projected/105b5a9b-c81b-43d5-bea0-7bfd062ed807-kube-api-access-tgrqn\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548446 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548577 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-config-data\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548674 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.548715 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.651869 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.651980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.652011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.652035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.652086 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrqn\" (UniqueName: \"kubernetes.io/projected/105b5a9b-c81b-43d5-bea0-7bfd062ed807-kube-api-access-tgrqn\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.652287 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.652315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.653368 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.654086 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.654138 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.654446 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.654913 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.655124 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-config-data\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.655786 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.662199 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.662568 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " 
pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.664655 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.672197 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrqn\" (UniqueName: \"kubernetes.io/projected/105b5a9b-c81b-43d5-bea0-7bfd062ed807-kube-api-access-tgrqn\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.710434 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") " pod="openstack/tempest-tests-tempest" Dec 03 09:58:35 crc kubenswrapper[4856]: I1203 09:58:35.773150 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 09:58:36 crc kubenswrapper[4856]: I1203 09:58:36.255773 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 09:58:36 crc kubenswrapper[4856]: I1203 09:58:36.366254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"105b5a9b-c81b-43d5-bea0-7bfd062ed807","Type":"ContainerStarted","Data":"70147358a1716374966221e86cd28c4e121e663a7887e1c5f3b08fed46f0b335"} Dec 03 09:59:16 crc kubenswrapper[4856]: E1203 09:59:16.720376 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 03 09:59:16 crc kubenswrapper[4856]: E1203 09:59:16.722341 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgrqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(105b5a9b-c81b-43d5-bea0-7bfd062ed807): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 09:59:16 crc kubenswrapper[4856]: E1203 09:59:16.723733 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="105b5a9b-c81b-43d5-bea0-7bfd062ed807" Dec 03 09:59:16 crc kubenswrapper[4856]: E1203 09:59:16.946826 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="105b5a9b-c81b-43d5-bea0-7bfd062ed807" Dec 03 09:59:30 crc kubenswrapper[4856]: I1203 09:59:30.169655 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 09:59:32 crc kubenswrapper[4856]: I1203 09:59:32.098966 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"105b5a9b-c81b-43d5-bea0-7bfd062ed807","Type":"ContainerStarted","Data":"ed65c87387fc6a0f30a85b4d7e6362bfd1a95de33163395a77a899c74898069a"} Dec 03 09:59:32 crc kubenswrapper[4856]: I1203 09:59:32.139891 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.236666169 podStartE2EDuration="58.13985583s" podCreationTimestamp="2025-12-03 09:58:34 +0000 UTC" firstStartedPulling="2025-12-03 09:58:36.263379377 +0000 UTC m=+2784.446271678" lastFinishedPulling="2025-12-03 09:59:30.166569038 +0000 UTC m=+2838.349461339" observedRunningTime="2025-12-03 09:59:32.127975765 +0000 UTC m=+2840.310868066" watchObservedRunningTime="2025-12-03 09:59:32.13985583 +0000 UTC m=+2840.322748141" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.160658 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw"] Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.162535 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.164949 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.165838 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.200936 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw"] Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.234714 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7blv\" (UniqueName: \"kubernetes.io/projected/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-kube-api-access-h7blv\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.235300 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-secret-volume\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.235507 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-config-volume\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.340256 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-secret-volume\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.341022 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-config-volume\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.341379 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7blv\" (UniqueName: \"kubernetes.io/projected/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-kube-api-access-h7blv\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.342431 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-config-volume\") pod 
\"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.350700 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-secret-volume\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.361556 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7blv\" (UniqueName: \"kubernetes.io/projected/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-kube-api-access-h7blv\") pod \"collect-profiles-29412600-rh2zw\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:00 crc kubenswrapper[4856]: I1203 10:00:00.503071 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:01 crc kubenswrapper[4856]: I1203 10:00:01.035887 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw"] Dec 03 10:00:01 crc kubenswrapper[4856]: I1203 10:00:01.465880 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" event={"ID":"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e","Type":"ContainerStarted","Data":"da75dbda04d0ab3dde1dd47ce069c0a08aaa238c055f7db5b799737486450227"} Dec 03 10:00:01 crc kubenswrapper[4856]: I1203 10:00:01.466331 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" event={"ID":"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e","Type":"ContainerStarted","Data":"cfb4776c2abec691bdc17a3a0a00715286fe795d928199ee8cc18e61f2329d0d"} Dec 03 10:00:01 crc kubenswrapper[4856]: I1203 10:00:01.496698 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" podStartSLOduration=1.496667512 podStartE2EDuration="1.496667512s" podCreationTimestamp="2025-12-03 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:00:01.483872624 +0000 UTC m=+2869.666764925" watchObservedRunningTime="2025-12-03 10:00:01.496667512 +0000 UTC m=+2869.679559803" Dec 03 10:00:02 crc kubenswrapper[4856]: I1203 10:00:02.478715 4856 generic.go:334] "Generic (PLEG): container finished" podID="c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" containerID="da75dbda04d0ab3dde1dd47ce069c0a08aaa238c055f7db5b799737486450227" exitCode=0 Dec 03 10:00:02 crc kubenswrapper[4856]: I1203 10:00:02.478790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" event={"ID":"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e","Type":"ContainerDied","Data":"da75dbda04d0ab3dde1dd47ce069c0a08aaa238c055f7db5b799737486450227"} Dec 03 10:00:03 crc kubenswrapper[4856]: I1203 10:00:03.935120 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.044693 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-config-volume\") pod \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.045113 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-secret-volume\") pod \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.045170 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7blv\" (UniqueName: \"kubernetes.io/projected/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-kube-api-access-h7blv\") pod \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\" (UID: \"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e\") " Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.045772 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" (UID: "c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.046318 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.053756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-kube-api-access-h7blv" (OuterVolumeSpecName: "kube-api-access-h7blv") pod "c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" (UID: "c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e"). InnerVolumeSpecName "kube-api-access-h7blv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.055029 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" (UID: "c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.148882 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.148940 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7blv\" (UniqueName: \"kubernetes.io/projected/c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e-kube-api-access-h7blv\") on node \"crc\" DevicePath \"\"" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.500059 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" event={"ID":"c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e","Type":"ContainerDied","Data":"cfb4776c2abec691bdc17a3a0a00715286fe795d928199ee8cc18e61f2329d0d"} Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.500129 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb4776c2abec691bdc17a3a0a00715286fe795d928199ee8cc18e61f2329d0d" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.500235 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412600-rh2zw" Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.591133 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj"] Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.602560 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412555-f5qgj"] Dec 03 10:00:04 crc kubenswrapper[4856]: I1203 10:00:04.707259 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13cce6b-a09e-4736-88b2-8212ae48ee93" path="/var/lib/kubelet/pods/a13cce6b-a09e-4736-88b2-8212ae48ee93/volumes" Dec 03 10:00:22 crc kubenswrapper[4856]: I1203 10:00:22.758681 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:00:22 crc kubenswrapper[4856]: I1203 10:00:22.759673 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.854674 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5zc7h"] Dec 03 10:00:32 crc kubenswrapper[4856]: E1203 10:00:32.858340 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" containerName="collect-profiles" Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.858760 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" containerName="collect-profiles" Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.859683 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6fc3bd9-52fd-4aaf-ba7e-1cb565db2c5e" containerName="collect-profiles" Dec 03 10:00:32 
crc kubenswrapper[4856]: I1203 10:00:32.862535 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.874394 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zc7h"] Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.925654 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-catalog-content\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.925865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkg2\" (UniqueName: \"kubernetes.io/projected/2b71c272-003f-4102-b96c-100c2b814018-kube-api-access-gdkg2\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:32 crc kubenswrapper[4856]: I1203 10:00:32.926003 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-utilities\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.027842 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-catalog-content\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.027951 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkg2\" (UniqueName: \"kubernetes.io/projected/2b71c272-003f-4102-b96c-100c2b814018-kube-api-access-gdkg2\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.028054 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-utilities\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.028655 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-catalog-content\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.028717 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-utilities\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h" 
Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.072027 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkg2\" (UniqueName: \"kubernetes.io/projected/2b71c272-003f-4102-b96c-100c2b814018-kube-api-access-gdkg2\") pod \"community-operators-5zc7h\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") " pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.198663 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:33 crc kubenswrapper[4856]: I1203 10:00:33.869749 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zc7h"]
Dec 03 10:00:34 crc kubenswrapper[4856]: I1203 10:00:34.829606 4856 generic.go:334] "Generic (PLEG): container finished" podID="2b71c272-003f-4102-b96c-100c2b814018" containerID="84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49" exitCode=0
Dec 03 10:00:34 crc kubenswrapper[4856]: I1203 10:00:34.829720 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerDied","Data":"84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49"}
Dec 03 10:00:34 crc kubenswrapper[4856]: I1203 10:00:34.830421 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerStarted","Data":"df997b4fcc520146259836d39ff5d3adfec0efc7d3ea14eb21755c07c792e946"}
Dec 03 10:00:34 crc kubenswrapper[4856]: I1203 10:00:34.833003 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 10:00:35 crc kubenswrapper[4856]: I1203 10:00:35.846560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerStarted","Data":"59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba"}
Dec 03 10:00:36 crc kubenswrapper[4856]: I1203 10:00:36.859198 4856 generic.go:334] "Generic (PLEG): container finished" podID="2b71c272-003f-4102-b96c-100c2b814018" containerID="59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba" exitCode=0
Dec 03 10:00:36 crc kubenswrapper[4856]: I1203 10:00:36.859261 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerDied","Data":"59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba"}
Dec 03 10:00:37 crc kubenswrapper[4856]: I1203 10:00:37.872083 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerStarted","Data":"f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b"}
Dec 03 10:00:39 crc kubenswrapper[4856]: I1203 10:00:39.333299 4856 scope.go:117] "RemoveContainer" containerID="929e71ad0ec03fe48012e7eab97f9dc572bef020013a71812a7e93386fa5c10e"
Dec 03 10:00:43 crc kubenswrapper[4856]: I1203 10:00:43.200025 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:43 crc kubenswrapper[4856]: I1203 10:00:43.200565 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:43 crc kubenswrapper[4856]: I1203 10:00:43.249875 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:43 crc kubenswrapper[4856]: I1203 10:00:43.274631 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5zc7h" podStartSLOduration=8.826455545 podStartE2EDuration="11.274606739s" podCreationTimestamp="2025-12-03 10:00:32 +0000 UTC" firstStartedPulling="2025-12-03 10:00:34.832728392 +0000 UTC m=+2903.015620693" lastFinishedPulling="2025-12-03 10:00:37.280879586 +0000 UTC m=+2905.463771887" observedRunningTime="2025-12-03 10:00:37.905380707 +0000 UTC m=+2906.088273008" watchObservedRunningTime="2025-12-03 10:00:43.274606739 +0000 UTC m=+2911.457499040"
Dec 03 10:00:44 crc kubenswrapper[4856]: I1203 10:00:44.003519 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:44 crc kubenswrapper[4856]: I1203 10:00:44.073096 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zc7h"]
Dec 03 10:00:45 crc kubenswrapper[4856]: I1203 10:00:45.954973 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5zc7h" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="registry-server" containerID="cri-o://f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b" gracePeriod=2
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.520426 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.584921 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-catalog-content\") pod \"2b71c272-003f-4102-b96c-100c2b814018\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") "
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.585178 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-utilities\") pod \"2b71c272-003f-4102-b96c-100c2b814018\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") "
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.585241 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkg2\" (UniqueName: \"kubernetes.io/projected/2b71c272-003f-4102-b96c-100c2b814018-kube-api-access-gdkg2\") pod \"2b71c272-003f-4102-b96c-100c2b814018\" (UID: \"2b71c272-003f-4102-b96c-100c2b814018\") "
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.586008 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-utilities" (OuterVolumeSpecName: "utilities") pod "2b71c272-003f-4102-b96c-100c2b814018" (UID: "2b71c272-003f-4102-b96c-100c2b814018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.603606 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b71c272-003f-4102-b96c-100c2b814018-kube-api-access-gdkg2" (OuterVolumeSpecName: "kube-api-access-gdkg2") pod "2b71c272-003f-4102-b96c-100c2b814018" (UID: "2b71c272-003f-4102-b96c-100c2b814018"). InnerVolumeSpecName "kube-api-access-gdkg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.644785 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b71c272-003f-4102-b96c-100c2b814018" (UID: "2b71c272-003f-4102-b96c-100c2b814018"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.688535 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.688574 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b71c272-003f-4102-b96c-100c2b814018-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.688587 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkg2\" (UniqueName: \"kubernetes.io/projected/2b71c272-003f-4102-b96c-100c2b814018-kube-api-access-gdkg2\") on node \"crc\" DevicePath \"\""
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.966422 4856 generic.go:334] "Generic (PLEG): container finished" podID="2b71c272-003f-4102-b96c-100c2b814018" containerID="f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b" exitCode=0
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.966476 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerDied","Data":"f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b"}
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.966498 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zc7h"
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.966516 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zc7h" event={"ID":"2b71c272-003f-4102-b96c-100c2b814018","Type":"ContainerDied","Data":"df997b4fcc520146259836d39ff5d3adfec0efc7d3ea14eb21755c07c792e946"}
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.966539 4856 scope.go:117] "RemoveContainer" containerID="f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b"
Dec 03 10:00:46 crc kubenswrapper[4856]: I1203 10:00:46.995141 4856 scope.go:117] "RemoveContainer" containerID="59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.000581 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zc7h"]
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.012478 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5zc7h"]
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.022991 4856 scope.go:117] "RemoveContainer" containerID="84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.070993 4856 scope.go:117] "RemoveContainer" containerID="f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b"
Dec 03 10:00:47 crc kubenswrapper[4856]: E1203 10:00:47.077522 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b\": container with ID starting with f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b not found: ID does not exist" containerID="f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.077596 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b"} err="failed to get container status \"f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b\": rpc error: code = NotFound desc = could not find container \"f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b\": container with ID starting with f12d788c9dd3eb36ea6472c13939d831bf0f3c6cde79a569f420c8deb572501b not found: ID does not exist"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.077636 4856 scope.go:117] "RemoveContainer" containerID="59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba"
Dec 03 10:00:47 crc kubenswrapper[4856]: E1203 10:00:47.078240 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba\": container with ID starting with 59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba not found: ID does not exist" containerID="59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.078295 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba"} err="failed to get container status \"59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba\": rpc error: code = NotFound desc = could not find container \"59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba\": container with ID starting with 59b3b07f2a076933d7fd5846ef61b175d9db4d400130ccc84b475a0ccd513eba not found: ID does not exist"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.078332 4856 scope.go:117] "RemoveContainer" containerID="84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49"
Dec 03 10:00:47 crc kubenswrapper[4856]: E1203 10:00:47.078691 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49\": container with ID starting with 84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49 not found: ID does not exist" containerID="84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49"
Dec 03 10:00:47 crc kubenswrapper[4856]: I1203 10:00:47.078730 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49"} err="failed to get container status \"84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49\": rpc error: code = NotFound desc = could not find container \"84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49\": container with ID starting with 84afd51288195525a11d5eaa961e256a9fe0ae9d53659647573e482bcc2cea49 not found: ID does not exist"
Dec 03 10:00:48 crc kubenswrapper[4856]: I1203 10:00:48.704076 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b71c272-003f-4102-b96c-100c2b814018" path="/var/lib/kubelet/pods/2b71c272-003f-4102-b96c-100c2b814018/volumes"
Dec 03 10:00:52 crc kubenswrapper[4856]: I1203 10:00:52.758882 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:00:52 crc kubenswrapper[4856]: I1203 10:00:52.760575 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.167687 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29412601-p5k85"]
Dec 03 10:01:00 crc kubenswrapper[4856]: E1203 10:01:00.169227 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="extract-utilities"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.169249 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="extract-utilities"
Dec 03 10:01:00 crc kubenswrapper[4856]: E1203 10:01:00.169263 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="extract-content"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.169272 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="extract-content"
Dec 03 10:01:00 crc kubenswrapper[4856]: E1203 10:01:00.169284 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="registry-server"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.169292 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="registry-server"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.169542 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b71c272-003f-4102-b96c-100c2b814018" containerName="registry-server"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.170615 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.193473 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29412601-p5k85"]
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.262064 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vhh\" (UniqueName: \"kubernetes.io/projected/c97ba672-751d-4c49-b856-c5b4c6ead955-kube-api-access-64vhh\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.262410 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-fernet-keys\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.262455 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-config-data\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.262505 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-combined-ca-bundle\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.366451 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64vhh\" (UniqueName: \"kubernetes.io/projected/c97ba672-751d-4c49-b856-c5b4c6ead955-kube-api-access-64vhh\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.367057 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-fernet-keys\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.367222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-config-data\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.367367 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-combined-ca-bundle\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.375688 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-fernet-keys\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.375922 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-config-data\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.383007 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-combined-ca-bundle\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.388088 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64vhh\" (UniqueName: \"kubernetes.io/projected/c97ba672-751d-4c49-b856-c5b4c6ead955-kube-api-access-64vhh\") pod \"keystone-cron-29412601-p5k85\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") " pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:00 crc kubenswrapper[4856]: I1203 10:01:00.520349 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:01 crc kubenswrapper[4856]: I1203 10:01:01.022494 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29412601-p5k85"]
Dec 03 10:01:01 crc kubenswrapper[4856]: I1203 10:01:01.123265 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412601-p5k85" event={"ID":"c97ba672-751d-4c49-b856-c5b4c6ead955","Type":"ContainerStarted","Data":"0f12175bea823be54281009f1f1b76f3af63a1382281f87b9a708a390b330691"}
Dec 03 10:01:02 crc kubenswrapper[4856]: I1203 10:01:02.136414 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412601-p5k85" event={"ID":"c97ba672-751d-4c49-b856-c5b4c6ead955","Type":"ContainerStarted","Data":"7bef2b7d18ab50f0bbdd3b7614f0982a74880f81bb0d392eff9b5a54a26749c2"}
Dec 03 10:01:02 crc kubenswrapper[4856]: I1203 10:01:02.159116 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29412601-p5k85" podStartSLOduration=2.15909594 podStartE2EDuration="2.15909594s" podCreationTimestamp="2025-12-03 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:01:02.153201214 +0000 UTC m=+2930.336093515" watchObservedRunningTime="2025-12-03 10:01:02.15909594 +0000 UTC m=+2930.341988241"
Dec 03 10:01:04 crc kubenswrapper[4856]: I1203 10:01:04.189758 4856 generic.go:334] "Generic (PLEG): container finished" podID="c97ba672-751d-4c49-b856-c5b4c6ead955" containerID="7bef2b7d18ab50f0bbdd3b7614f0982a74880f81bb0d392eff9b5a54a26749c2" exitCode=0
Dec 03 10:01:04 crc kubenswrapper[4856]: I1203 10:01:04.189836 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412601-p5k85" event={"ID":"c97ba672-751d-4c49-b856-c5b4c6ead955","Type":"ContainerDied","Data":"7bef2b7d18ab50f0bbdd3b7614f0982a74880f81bb0d392eff9b5a54a26749c2"}
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.786046 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.972574 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-fernet-keys\") pod \"c97ba672-751d-4c49-b856-c5b4c6ead955\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") "
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.972829 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-combined-ca-bundle\") pod \"c97ba672-751d-4c49-b856-c5b4c6ead955\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") "
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.972917 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-config-data\") pod \"c97ba672-751d-4c49-b856-c5b4c6ead955\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") "
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.972982 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64vhh\" (UniqueName: \"kubernetes.io/projected/c97ba672-751d-4c49-b856-c5b4c6ead955-kube-api-access-64vhh\") pod \"c97ba672-751d-4c49-b856-c5b4c6ead955\" (UID: \"c97ba672-751d-4c49-b856-c5b4c6ead955\") "
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.981566 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97ba672-751d-4c49-b856-c5b4c6ead955-kube-api-access-64vhh" (OuterVolumeSpecName: "kube-api-access-64vhh") pod "c97ba672-751d-4c49-b856-c5b4c6ead955" (UID: "c97ba672-751d-4c49-b856-c5b4c6ead955"). InnerVolumeSpecName "kube-api-access-64vhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:01:05 crc kubenswrapper[4856]: I1203 10:01:05.983112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c97ba672-751d-4c49-b856-c5b4c6ead955" (UID: "c97ba672-751d-4c49-b856-c5b4c6ead955"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.008187 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c97ba672-751d-4c49-b856-c5b4c6ead955" (UID: "c97ba672-751d-4c49-b856-c5b4c6ead955"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.046865 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-config-data" (OuterVolumeSpecName: "config-data") pod "c97ba672-751d-4c49-b856-c5b4c6ead955" (UID: "c97ba672-751d-4c49-b856-c5b4c6ead955"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.075795 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.075847 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.075860 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97ba672-751d-4c49-b856-c5b4c6ead955-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.075869 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64vhh\" (UniqueName: \"kubernetes.io/projected/c97ba672-751d-4c49-b856-c5b4c6ead955-kube-api-access-64vhh\") on node \"crc\" DevicePath \"\""
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.211527 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412601-p5k85" event={"ID":"c97ba672-751d-4c49-b856-c5b4c6ead955","Type":"ContainerDied","Data":"0f12175bea823be54281009f1f1b76f3af63a1382281f87b9a708a390b330691"}
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.211584 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412601-p5k85"
Dec 03 10:01:06 crc kubenswrapper[4856]: I1203 10:01:06.211591 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f12175bea823be54281009f1f1b76f3af63a1382281f87b9a708a390b330691"
Dec 03 10:01:22 crc kubenswrapper[4856]: I1203 10:01:22.760595 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:01:22 crc kubenswrapper[4856]: I1203 10:01:22.761594 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:01:22 crc kubenswrapper[4856]: I1203 10:01:22.761716 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w"
Dec 03 10:01:22 crc kubenswrapper[4856]: I1203 10:01:22.763259 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71dba7a694c0a887a30c65f12fbc402c4df7114c113294479491eff356529a72"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 10:01:22 crc kubenswrapper[4856]: I1203 10:01:22.763371 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://71dba7a694c0a887a30c65f12fbc402c4df7114c113294479491eff356529a72" gracePeriod=600
Dec 03 10:01:23 crc kubenswrapper[4856]: I1203 10:01:23.395193 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="71dba7a694c0a887a30c65f12fbc402c4df7114c113294479491eff356529a72" exitCode=0
Dec 03 10:01:23 crc kubenswrapper[4856]: I1203 10:01:23.395271 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"71dba7a694c0a887a30c65f12fbc402c4df7114c113294479491eff356529a72"}
Dec 03 10:01:23 crc kubenswrapper[4856]: I1203 10:01:23.395658 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"}
Dec 03 10:01:23 crc kubenswrapper[4856]: I1203 10:01:23.395681 4856 scope.go:117] "RemoveContainer" containerID="09f26fc461491bc414dfa33c31c23c5c93d8d3d7033b19ac6a6b2cc7123cdab4"
Dec 03 10:03:52 crc kubenswrapper[4856]: I1203 10:03:52.758560 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:03:52 crc kubenswrapper[4856]: I1203 10:03:52.759120 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:04:22 crc kubenswrapper[4856]: I1203 10:04:22.759318 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:04:22 crc kubenswrapper[4856]: I1203 10:04:22.759998 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.758985 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.759593 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.759650 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w"
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.760685 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.760758 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" gracePeriod=600
Dec 03 10:04:52 crc kubenswrapper[4856]: E1203 10:04:52.887282 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.911187 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" exitCode=0
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.911259 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"}
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.911342 4856 scope.go:117] "RemoveContainer" containerID="71dba7a694c0a887a30c65f12fbc402c4df7114c113294479491eff356529a72"
Dec 03 10:04:52 crc kubenswrapper[4856]: I1203 10:04:52.912116 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:04:52 crc kubenswrapper[4856]: E1203 10:04:52.912418 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:05:03 crc kubenswrapper[4856]: I1203 10:05:03.688747 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:05:03 crc kubenswrapper[4856]: E1203 10:05:03.689572 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.939471 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mfbjl"]
Dec 03 10:05:16 crc kubenswrapper[4856]: E1203 10:05:16.942257 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97ba672-751d-4c49-b856-c5b4c6ead955" containerName="keystone-cron"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.942391 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97ba672-751d-4c49-b856-c5b4c6ead955" containerName="keystone-cron"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.942769 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97ba672-751d-4c49-b856-c5b4c6ead955" containerName="keystone-cron"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.944400 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.960005 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfbjl"]
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.970994 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l55j9\" (UniqueName: \"kubernetes.io/projected/092f1a66-40b6-4a06-81a7-7a102bf587cd-kube-api-access-l55j9\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.971066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-utilities\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:16 crc kubenswrapper[4856]: I1203 10:05:16.971101 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-catalog-content\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.073379 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-utilities\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.073767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-catalog-content\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.074044 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l55j9\" (UniqueName: \"kubernetes.io/projected/092f1a66-40b6-4a06-81a7-7a102bf587cd-kube-api-access-l55j9\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.074129 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-utilities\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.074129 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-catalog-content\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.098068 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l55j9\" (UniqueName: \"kubernetes.io/projected/092f1a66-40b6-4a06-81a7-7a102bf587cd-kube-api-access-l55j9\") pod \"certified-operators-mfbjl\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") " pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.264873 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.690344 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:05:17 crc kubenswrapper[4856]: E1203 10:05:17.691176 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:05:17 crc kubenswrapper[4856]: I1203 10:05:17.893529 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfbjl"]
Dec 03 10:05:18 crc kubenswrapper[4856]: I1203 10:05:18.201714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerStarted","Data":"488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486"}
Dec 03 10:05:18 crc kubenswrapper[4856]: I1203 10:05:18.203453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerStarted","Data":"f699e05f6f2bc391431e21e8179a05e9fdb44a51584faf323238e86f7afcbca3"}
Dec 03 10:05:19 crc kubenswrapper[4856]: I1203 10:05:19.219272 4856 generic.go:334] "Generic (PLEG): container finished" podID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerID="488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486" exitCode=0
Dec 03 10:05:19 crc kubenswrapper[4856]: I1203 10:05:19.219384 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerDied","Data":"488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486"}
Dec 03 10:05:19 crc kubenswrapper[4856]: I1203 10:05:19.219888 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerStarted","Data":"66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824"}
Dec 03 10:05:20 crc kubenswrapper[4856]: I1203 10:05:20.236932 4856 generic.go:334] "Generic (PLEG): container finished" podID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerID="66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824" exitCode=0
Dec 03 10:05:20 crc kubenswrapper[4856]: I1203 10:05:20.237053 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerDied","Data":"66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824"}
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.149907 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dgmr7"]
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.179268 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.198018 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgmr7"]
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.273838 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerStarted","Data":"4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b"}
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.300261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqxcv\" (UniqueName: \"kubernetes.io/projected/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-kube-api-access-mqxcv\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.300485 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-utilities\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.300508 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-catalog-content\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.300722 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mfbjl" podStartSLOduration=2.842702853 podStartE2EDuration="5.300707711s" podCreationTimestamp="2025-12-03 10:05:16 +0000 UTC" firstStartedPulling="2025-12-03 10:05:18.203775543 +0000 UTC m=+3186.386667844" lastFinishedPulling="2025-12-03 10:05:20.661780391 +0000 UTC m=+3188.844672702" observedRunningTime="2025-12-03 10:05:21.298382983 +0000 UTC m=+3189.481275284" watchObservedRunningTime="2025-12-03 10:05:21.300707711 +0000 UTC m=+3189.483600012"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.401663 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-utilities\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.401733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-catalog-content\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.401768 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqxcv\" (UniqueName: \"kubernetes.io/projected/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-kube-api-access-mqxcv\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.402916 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-utilities\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.402971 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-catalog-content\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.425926 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqxcv\" (UniqueName: \"kubernetes.io/projected/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-kube-api-access-mqxcv\") pod \"redhat-marketplace-dgmr7\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.541378 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.757540 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xd4bh"]
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.760641 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.781520 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xd4bh"]
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.811363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-catalog-content\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.811496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-utilities\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.811556 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2dmj\" (UniqueName: \"kubernetes.io/projected/b926551e-ee82-4084-84ff-b924cadf39b1-kube-api-access-n2dmj\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.914330 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-catalog-content\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.914450 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-utilities\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.914492 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2dmj\" (UniqueName: \"kubernetes.io/projected/b926551e-ee82-4084-84ff-b924cadf39b1-kube-api-access-n2dmj\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.915280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-catalog-content\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.915512 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-utilities\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:21 crc kubenswrapper[4856]: I1203 10:05:21.940046 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2dmj\" (UniqueName: \"kubernetes.io/projected/b926551e-ee82-4084-84ff-b924cadf39b1-kube-api-access-n2dmj\") pod \"redhat-operators-xd4bh\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:22 crc kubenswrapper[4856]: I1203 10:05:22.071771 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgmr7"]
Dec 03 10:05:22 crc kubenswrapper[4856]: I1203 10:05:22.093155 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xd4bh"
Dec 03 10:05:22 crc kubenswrapper[4856]: I1203 10:05:22.327477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerStarted","Data":"64ee106b3caf42d9a45da5d486cfae68cb05d03f6b9f7a47c661e73761e96c43"}
Dec 03 10:05:22 crc kubenswrapper[4856]: I1203 10:05:22.619651 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xd4bh"]
Dec 03 10:05:23 crc kubenswrapper[4856]: I1203 10:05:23.339448 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerID="f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba" exitCode=0
Dec 03 10:05:23 crc kubenswrapper[4856]: I1203 10:05:23.339563 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerDied","Data":"f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba"}
Dec 03 10:05:23 crc kubenswrapper[4856]: I1203 10:05:23.343761 4856 generic.go:334] "Generic (PLEG): container finished" podID="b926551e-ee82-4084-84ff-b924cadf39b1" containerID="79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285" exitCode=0
Dec 03 10:05:23 crc kubenswrapper[4856]: I1203 10:05:23.344048 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerDied","Data":"79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285"}
Dec 03 10:05:23 crc kubenswrapper[4856]: I1203 10:05:23.344136 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerStarted","Data":"ee9c612b533e88dc329dc9a3a873c19a4365c0c328750da1b84ce1ee765f5cdb"}
Dec 03 10:05:24 crc kubenswrapper[4856]: I1203 10:05:24.357886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerStarted","Data":"00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d"}
Dec 03 10:05:24 crc kubenswrapper[4856]: I1203 10:05:24.362184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerStarted","Data":"b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea"}
Dec 03 10:05:25 crc kubenswrapper[4856]: I1203 10:05:25.381957 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerID="00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d" exitCode=0
Dec 03 10:05:25 crc kubenswrapper[4856]: I1203 10:05:25.382105 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerDied","Data":"00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d"}
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.265235 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.265706 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.325796 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.402681 4856 generic.go:334] "Generic (PLEG): container finished" podID="b926551e-ee82-4084-84ff-b924cadf39b1" containerID="b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea" exitCode=0
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.402790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerDied","Data":"b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea"}
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.418350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerStarted","Data":"3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f"}
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.465494 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dgmr7" podStartSLOduration=3.719702775 podStartE2EDuration="6.465458222s" podCreationTimestamp="2025-12-03 10:05:21 +0000 UTC" firstStartedPulling="2025-12-03 10:05:23.342064456 +0000 UTC m=+3191.524956757" lastFinishedPulling="2025-12-03 10:05:26.087819903 +0000 UTC m=+3194.270712204" observedRunningTime="2025-12-03 10:05:27.44832419 +0000 UTC m=+3195.631216511" watchObservedRunningTime="2025-12-03 10:05:27.465458222 +0000 UTC m=+3195.648350523"
Dec 03 10:05:27 crc kubenswrapper[4856]: I1203 10:05:27.477024 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:28 crc kubenswrapper[4856]: I1203 10:05:28.430306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerStarted","Data":"ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22"}
Dec 03 10:05:28 crc kubenswrapper[4856]: I1203 10:05:28.467380 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xd4bh" podStartSLOduration=2.696169581 podStartE2EDuration="7.467356891s" podCreationTimestamp="2025-12-03 10:05:21 +0000 UTC" firstStartedPulling="2025-12-03 10:05:23.347496993 +0000 UTC m=+3191.530389294" lastFinishedPulling="2025-12-03 10:05:28.118684293 +0000 UTC m=+3196.301576604" observedRunningTime="2025-12-03 10:05:28.458485157 +0000 UTC m=+3196.641377478" watchObservedRunningTime="2025-12-03 10:05:28.467356891 +0000 UTC m=+3196.650249192"
Dec 03 10:05:30 crc kubenswrapper[4856]: I1203 10:05:30.722900 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfbjl"]
Dec 03 10:05:30 crc kubenswrapper[4856]: I1203 10:05:30.723466 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mfbjl" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="registry-server" containerID="cri-o://4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b" gracePeriod=2
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.275380 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.291767 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-catalog-content\") pod \"092f1a66-40b6-4a06-81a7-7a102bf587cd\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") "
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.291931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-utilities\") pod \"092f1a66-40b6-4a06-81a7-7a102bf587cd\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") "
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.292029 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l55j9\" (UniqueName: \"kubernetes.io/projected/092f1a66-40b6-4a06-81a7-7a102bf587cd-kube-api-access-l55j9\") pod \"092f1a66-40b6-4a06-81a7-7a102bf587cd\" (UID: \"092f1a66-40b6-4a06-81a7-7a102bf587cd\") "
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.294623 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-utilities" (OuterVolumeSpecName: "utilities") pod "092f1a66-40b6-4a06-81a7-7a102bf587cd" (UID: "092f1a66-40b6-4a06-81a7-7a102bf587cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.306738 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092f1a66-40b6-4a06-81a7-7a102bf587cd-kube-api-access-l55j9" (OuterVolumeSpecName: "kube-api-access-l55j9") pod "092f1a66-40b6-4a06-81a7-7a102bf587cd" (UID: "092f1a66-40b6-4a06-81a7-7a102bf587cd"). InnerVolumeSpecName "kube-api-access-l55j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.368436 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "092f1a66-40b6-4a06-81a7-7a102bf587cd" (UID: "092f1a66-40b6-4a06-81a7-7a102bf587cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.393718 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l55j9\" (UniqueName: \"kubernetes.io/projected/092f1a66-40b6-4a06-81a7-7a102bf587cd-kube-api-access-l55j9\") on node \"crc\" DevicePath \"\""
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.393763 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.393773 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092f1a66-40b6-4a06-81a7-7a102bf587cd-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.507744 4856 generic.go:334] "Generic (PLEG): container finished" podID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerID="4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b" exitCode=0
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.507843 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerDied","Data":"4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b"}
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.507895 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfbjl" event={"ID":"092f1a66-40b6-4a06-81a7-7a102bf587cd","Type":"ContainerDied","Data":"f699e05f6f2bc391431e21e8179a05e9fdb44a51584faf323238e86f7afcbca3"}
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.507917 4856 scope.go:117] "RemoveContainer" containerID="4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.508162 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfbjl"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.542438 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.543579 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.558663 4856 scope.go:117] "RemoveContainer" containerID="66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.599354 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfbjl"]
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.602719 4856 scope.go:117] "RemoveContainer" containerID="488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.611517 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dgmr7"
Dec 03 10:05:31 crc kubenswrapper[4856]: E1203 10:05:31.628415 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod092f1a66_40b6_4a06_81a7_7a102bf587cd.slice\": RecentStats: unable to find data in memory cache]"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.641456 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mfbjl"]
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.647390 4856 scope.go:117] "RemoveContainer" containerID="4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b"
Dec 03 10:05:31 crc kubenswrapper[4856]: E1203 10:05:31.650374 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b\": container with ID starting with 4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b not found: ID does not exist" containerID="4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.650454 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b"} err="failed to get container status \"4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b\": rpc error: code = NotFound desc = could not find container \"4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b\": container with ID starting with 4e059978c5a48ab7b5367abbd925bf6291f3f11d04bcea4ec863760a5299dc3b not found: ID does not exist"
Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.650494 4856 scope.go:117] "RemoveContainer" containerID="66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824"
Dec 03 10:05:31 crc kubenswrapper[4856]: E1203 10:05:31.650981 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824\": container with ID starting with 66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824 not found: ID does not exist"
containerID="66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824" Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.651005 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824"} err="failed to get container status \"66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824\": rpc error: code = NotFound desc = could not find container \"66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824\": container with ID starting with 66b5dda73f25a8edfdf10329f69ac49d0708cbd6301430236f8c0d730891a824 not found: ID does not exist" Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.651020 4856 scope.go:117] "RemoveContainer" containerID="488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486" Dec 03 10:05:31 crc kubenswrapper[4856]: E1203 10:05:31.651463 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486\": container with ID starting with 488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486 not found: ID does not exist" containerID="488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486" Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.651490 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486"} err="failed to get container status \"488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486\": rpc error: code = NotFound desc = could not find container \"488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486\": container with ID starting with 488d82891b3ba15100d1802a49f792f59bbed1fa717cc429cd00b571ba3f6486 not found: ID does not exist" Dec 03 10:05:31 crc kubenswrapper[4856]: I1203 10:05:31.691369 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" Dec 03 10:05:31 crc kubenswrapper[4856]: E1203 10:05:31.691582 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:05:32 crc kubenswrapper[4856]: I1203 10:05:32.093725 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xd4bh" Dec 03 10:05:32 crc kubenswrapper[4856]: I1203 10:05:32.094101 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xd4bh" Dec 03 10:05:32 crc kubenswrapper[4856]: I1203 10:05:32.584066 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dgmr7" Dec 03 10:05:32 crc kubenswrapper[4856]: I1203 10:05:32.702910 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" path="/var/lib/kubelet/pods/092f1a66-40b6-4a06-81a7-7a102bf587cd/volumes" Dec 03 10:05:33 crc kubenswrapper[4856]: I1203 10:05:33.144805 4856 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-xd4bh" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="registry-server" probeResult="failure" output=< Dec 03 10:05:33 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Dec 03 10:05:33 crc kubenswrapper[4856]: > Dec 03 10:05:35 crc kubenswrapper[4856]: I1203 10:05:35.120027 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgmr7"] Dec 03 10:05:35 crc kubenswrapper[4856]: I1203 10:05:35.565477 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dgmr7" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="registry-server" containerID="cri-o://3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f" gracePeriod=2 Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.099288 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgmr7" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.169973 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-utilities\") pod \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.170021 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-catalog-content\") pod \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.170075 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqxcv\" (UniqueName: \"kubernetes.io/projected/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-kube-api-access-mqxcv\") pod \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\" (UID: \"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e\") " Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.170770 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-utilities" (OuterVolumeSpecName: "utilities") pod "9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" (UID: "9c3da3ab-b931-4ede-89f8-82f3a61a9f5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.178808 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-kube-api-access-mqxcv" (OuterVolumeSpecName: "kube-api-access-mqxcv") pod "9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" (UID: "9c3da3ab-b931-4ede-89f8-82f3a61a9f5e"). InnerVolumeSpecName "kube-api-access-mqxcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.191508 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" (UID: "9c3da3ab-b931-4ede-89f8-82f3a61a9f5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.272949 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.272989 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.273005 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqxcv\" (UniqueName: \"kubernetes.io/projected/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e-kube-api-access-mqxcv\") on node \"crc\" DevicePath \"\"" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.580370 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerID="3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f" exitCode=0 Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.580419 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerDied","Data":"3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f"} Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.580453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgmr7" event={"ID":"9c3da3ab-b931-4ede-89f8-82f3a61a9f5e","Type":"ContainerDied","Data":"64ee106b3caf42d9a45da5d486cfae68cb05d03f6b9f7a47c661e73761e96c43"} Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.580464 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgmr7" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.580473 4856 scope.go:117] "RemoveContainer" containerID="3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.626979 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgmr7"] Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.630444 4856 scope.go:117] "RemoveContainer" containerID="00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.642893 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgmr7"] Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.657248 4856 scope.go:117] "RemoveContainer" containerID="f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.700277 4856 scope.go:117] "RemoveContainer" containerID="3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f" Dec 03 10:05:36 crc kubenswrapper[4856]: E1203 10:05:36.700749 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f\": container with ID starting with 3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f not found: ID does not exist" containerID="3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.700793 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f"} err="failed to get container status \"3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f\": rpc error: code = NotFound desc = could not find container \"3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f\": container with ID starting with 3e84f6da7fba8735b0b5312a37817aadc39124f58b05d71a0f418f9348ab5a4f not found: ID does not exist" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.700834 4856 scope.go:117] "RemoveContainer" containerID="00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d" Dec 03 10:05:36 crc kubenswrapper[4856]: E1203 10:05:36.701468 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d\": container with ID starting with 00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d not found: ID does not exist" containerID="00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.701565 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d"} err="failed to get container status \"00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d\": rpc error: code = NotFound desc = could not find container \"00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d\": container with ID starting with 00a5ebf9288c39b5c749faaf66dd56ba6a5e7df95a476abc1b6527157003a35d not found: ID does not exist" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.701628 4856 scope.go:117] "RemoveContainer" 
containerID="f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba" Dec 03 10:05:36 crc kubenswrapper[4856]: E1203 10:05:36.702043 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba\": container with ID starting with f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba not found: ID does not exist" containerID="f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.702091 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba"} err="failed to get container status \"f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba\": rpc error: code = NotFound desc = could not find container \"f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba\": container with ID starting with f31fba056af7d3519cfd645f29974a58c79067f55997194ec07dac4e94d2bcba not found: ID does not exist" Dec 03 10:05:36 crc kubenswrapper[4856]: I1203 10:05:36.703653 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" path="/var/lib/kubelet/pods/9c3da3ab-b931-4ede-89f8-82f3a61a9f5e/volumes" Dec 03 10:05:42 crc kubenswrapper[4856]: I1203 10:05:42.151593 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xd4bh" Dec 03 10:05:42 crc kubenswrapper[4856]: I1203 10:05:42.207220 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xd4bh" Dec 03 10:05:42 crc kubenswrapper[4856]: I1203 10:05:42.391993 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xd4bh"] Dec 03 10:05:43 crc kubenswrapper[4856]: I1203 10:05:43.660705 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xd4bh" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="registry-server" containerID="cri-o://ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22" gracePeriod=2 Dec 03 10:05:43 crc kubenswrapper[4856]: I1203 10:05:43.689449 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" Dec 03 10:05:43 crc kubenswrapper[4856]: E1203 10:05:43.689738 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.214009 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xd4bh" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.372732 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2dmj\" (UniqueName: \"kubernetes.io/projected/b926551e-ee82-4084-84ff-b924cadf39b1-kube-api-access-n2dmj\") pod \"b926551e-ee82-4084-84ff-b924cadf39b1\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.372840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-utilities\") pod \"b926551e-ee82-4084-84ff-b924cadf39b1\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.373066 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-catalog-content\") pod \"b926551e-ee82-4084-84ff-b924cadf39b1\" (UID: \"b926551e-ee82-4084-84ff-b924cadf39b1\") " Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.374395 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-utilities" (OuterVolumeSpecName: "utilities") pod "b926551e-ee82-4084-84ff-b924cadf39b1" (UID: "b926551e-ee82-4084-84ff-b924cadf39b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.381066 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b926551e-ee82-4084-84ff-b924cadf39b1-kube-api-access-n2dmj" (OuterVolumeSpecName: "kube-api-access-n2dmj") pod "b926551e-ee82-4084-84ff-b924cadf39b1" (UID: "b926551e-ee82-4084-84ff-b924cadf39b1"). InnerVolumeSpecName "kube-api-access-n2dmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.477538 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2dmj\" (UniqueName: \"kubernetes.io/projected/b926551e-ee82-4084-84ff-b924cadf39b1-kube-api-access-n2dmj\") on node \"crc\" DevicePath \"\"" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.478472 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.500662 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b926551e-ee82-4084-84ff-b924cadf39b1" (UID: "b926551e-ee82-4084-84ff-b924cadf39b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.582305 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b926551e-ee82-4084-84ff-b924cadf39b1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.673738 4856 generic.go:334] "Generic (PLEG): container finished" podID="b926551e-ee82-4084-84ff-b924cadf39b1" containerID="ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22" exitCode=0 Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.673834 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerDied","Data":"ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22"} Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.674114 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xd4bh" event={"ID":"b926551e-ee82-4084-84ff-b924cadf39b1","Type":"ContainerDied","Data":"ee9c612b533e88dc329dc9a3a873c19a4365c0c328750da1b84ce1ee765f5cdb"} Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.674143 4856 scope.go:117] "RemoveContainer" containerID="ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.673870 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xd4bh" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.698443 4856 scope.go:117] "RemoveContainer" containerID="b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.717506 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xd4bh"] Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.726899 4856 scope.go:117] "RemoveContainer" containerID="79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.727468 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xd4bh"] Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.788183 4856 scope.go:117] "RemoveContainer" containerID="ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22" Dec 03 10:05:44 crc kubenswrapper[4856]: E1203 10:05:44.790144 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22\": container with ID starting with ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22 not found: ID does not exist" containerID="ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.790187 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22"} err="failed to get container status \"ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22\": rpc error: code = NotFound desc = could not find container \"ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22\": container with ID starting with ffabad5ea1bc08a6d8be7b31319c8c6bd43bad6b66cc14d70313036e2f390a22 not found: ID does not exist" Dec 03 10:05:44 crc 
kubenswrapper[4856]: I1203 10:05:44.790216 4856 scope.go:117] "RemoveContainer" containerID="b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea" Dec 03 10:05:44 crc kubenswrapper[4856]: E1203 10:05:44.790847 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea\": container with ID starting with b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea not found: ID does not exist" containerID="b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.790874 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea"} err="failed to get container status \"b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea\": rpc error: code = NotFound desc = could not find container \"b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea\": container with ID starting with b95ca76037e9a9f8e44e2ca9a8157a7b6afb256463c2cead819f2fd2d73306ea not found: ID does not exist" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.790894 4856 scope.go:117] "RemoveContainer" containerID="79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285" Dec 03 10:05:44 crc kubenswrapper[4856]: E1203 10:05:44.791161 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285\": container with ID starting with 79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285 not found: ID does not exist" containerID="79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285" Dec 03 10:05:44 crc kubenswrapper[4856]: I1203 10:05:44.791181 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285"} err="failed to get container status \"79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285\": rpc error: code = NotFound desc = could not find container \"79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285\": container with ID starting with 79e1403e4784d45816e80a1e79451dbf91165d202ce8a66c6475da78a4365285 not found: ID does not exist" Dec 03 10:05:46 crc kubenswrapper[4856]: I1203 10:05:46.701908 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" path="/var/lib/kubelet/pods/b926551e-ee82-4084-84ff-b924cadf39b1/volumes" Dec 03 10:05:57 crc kubenswrapper[4856]: I1203 10:05:57.689092 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" Dec 03 10:05:57 crc kubenswrapper[4856]: E1203 10:05:57.690507 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:06:12 crc kubenswrapper[4856]: I1203 10:06:12.697076 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" 
Dec 03 10:06:12 crc kubenswrapper[4856]: E1203 10:06:12.698272 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:06:24 crc kubenswrapper[4856]: I1203 10:06:24.693266 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:06:24 crc kubenswrapper[4856]: E1203 10:06:24.694242 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:06:36 crc kubenswrapper[4856]: I1203 10:06:36.689478 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:06:36 crc kubenswrapper[4856]: E1203 10:06:36.690533 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:06:47 crc kubenswrapper[4856]: I1203 10:06:47.689580 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:06:47 crc kubenswrapper[4856]: E1203 10:06:47.692220 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:07:01 crc kubenswrapper[4856]: I1203 10:07:01.689691 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:07:01 crc kubenswrapper[4856]: E1203 10:07:01.690547 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:07:16 crc kubenswrapper[4856]: I1203 10:07:16.689340 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:07:16 crc kubenswrapper[4856]: E1203 10:07:16.690084 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:07:28 crc kubenswrapper[4856]: I1203 10:07:28.689889 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:07:28 crc kubenswrapper[4856]: E1203 10:07:28.690665 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:07:41 crc kubenswrapper[4856]: I1203 10:07:41.689881 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:07:41 crc kubenswrapper[4856]: E1203 10:07:41.690932 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:07:53 crc kubenswrapper[4856]: I1203 10:07:53.688975 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:07:53 crc kubenswrapper[4856]: E1203 10:07:53.690849 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:08:06 crc kubenswrapper[4856]: I1203 10:08:06.689513 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:08:06 crc kubenswrapper[4856]: E1203 10:08:06.690433 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:08:18 crc kubenswrapper[4856]: I1203 10:08:18.691891 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:08:18 crc kubenswrapper[4856]: E1203 10:08:18.693373 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:08:33 crc kubenswrapper[4856]: I1203 10:08:33.689652 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:08:33 crc kubenswrapper[4856]: E1203 10:08:33.690584 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:08:45 crc kubenswrapper[4856]: I1203 10:08:45.688603 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:08:45 crc kubenswrapper[4856]: E1203 10:08:45.689440 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:09:00 crc kubenswrapper[4856]: I1203 10:09:00.689390 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:09:00 crc kubenswrapper[4856]: E1203 10:09:00.690277 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:09:15 crc kubenswrapper[4856]: I1203 10:09:15.689230 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:09:15 crc kubenswrapper[4856]: E1203 10:09:15.692017 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:09:27 crc kubenswrapper[4856]: I1203 10:09:27.689112 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:09:27 crc kubenswrapper[4856]: E1203 10:09:27.690039 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:09:38 crc kubenswrapper[4856]: I1203 10:09:38.690439 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:09:38 crc kubenswrapper[4856]: E1203 10:09:38.691348 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:09:53 crc kubenswrapper[4856]: I1203 10:09:53.689865 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20"
Dec 03 10:09:54 crc kubenswrapper[4856]: I1203 10:09:54.322343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"4b58b7064155eef340606430eb8f8870517eb39d7cd758247003e1cf96011709"}
Dec 03 10:10:56 crc kubenswrapper[4856]: I1203 10:10:56.049521 4856 generic.go:334] "Generic (PLEG): container finished" podID="105b5a9b-c81b-43d5-bea0-7bfd062ed807" containerID="ed65c87387fc6a0f30a85b4d7e6362bfd1a95de33163395a77a899c74898069a" exitCode=0
Dec 03 10:10:56 crc kubenswrapper[4856]: I1203 10:10:56.049629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"105b5a9b-c81b-43d5-bea0-7bfd062ed807","Type":"ContainerDied","Data":"ed65c87387fc6a0f30a85b4d7e6362bfd1a95de33163395a77a899c74898069a"}
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.471036 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.609811 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-config-data\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.609920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ca-certs\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610123 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrqn\" (UniqueName: \"kubernetes.io/projected/105b5a9b-c81b-43d5-bea0-7bfd062ed807-kube-api-access-tgrqn\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610157 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610193 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config-secret\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610229 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ssh-key\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610292 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610423 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-workdir\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.610498 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-temporary\") pod \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\" (UID: \"105b5a9b-c81b-43d5-bea0-7bfd062ed807\") "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.611945 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-config-data" (OuterVolumeSpecName: "config-data") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.613113 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.620427 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/105b5a9b-c81b-43d5-bea0-7bfd062ed807-kube-api-access-tgrqn" (OuterVolumeSpecName: "kube-api-access-tgrqn") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "kube-api-access-tgrqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.622525 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.625962 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.641533 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.668258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.676766 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.677095 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "105b5a9b-c81b-43d5-bea0-7bfd062ed807" (UID: "105b5a9b-c81b-43d5-bea0-7bfd062ed807"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.715207 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgrqn\" (UniqueName: \"kubernetes.io/projected/105b5a9b-c81b-43d5-bea0-7bfd062ed807-kube-api-access-tgrqn\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.715407 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.715526 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.715704 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.731228 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.731530 4856 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.731633 4856 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/105b5a9b-c81b-43d5-bea0-7bfd062ed807-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.731729 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/105b5a9b-c81b-43d5-bea0-7bfd062ed807-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.731851 4856 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/105b5a9b-c81b-43d5-bea0-7bfd062ed807-ca-certs\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.745533 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Dec 03 10:10:57 crc kubenswrapper[4856]: I1203 10:10:57.833616 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Dec 03 10:10:58 crc kubenswrapper[4856]: I1203 10:10:58.075529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"105b5a9b-c81b-43d5-bea0-7bfd062ed807","Type":"ContainerDied","Data":"70147358a1716374966221e86cd28c4e121e663a7887e1c5f3b08fed46f0b335"}
Dec 03 10:10:58 crc kubenswrapper[4856]: I1203 10:10:58.075582 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70147358a1716374966221e86cd28c4e121e663a7887e1c5f3b08fed46f0b335"
Dec 03 10:10:58 crc kubenswrapper[4856]: I1203 10:10:58.075626 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.074230 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075166 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="extract-utilities"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075181 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="extract-utilities"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075205 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="extract-content"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075212 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="extract-content"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075224 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075232 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075252 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="105b5a9b-c81b-43d5-bea0-7bfd062ed807" containerName="tempest-tests-tempest-tests-runner"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075258 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="105b5a9b-c81b-43d5-bea0-7bfd062ed807" containerName="tempest-tests-tempest-tests-runner"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075268 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075276 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075286 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075291 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075305 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="extract-utilities"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075311 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="extract-utilities"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075325 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="extract-content"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075331 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="extract-content"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075353 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="extract-utilities"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075359 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="extract-utilities"
Dec 03 10:11:00 crc kubenswrapper[4856]: E1203 10:11:00.075375 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="extract-content"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075381 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="extract-content"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075555 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3da3ab-b931-4ede-89f8-82f3a61a9f5e" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075571 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b926551e-ee82-4084-84ff-b924cadf39b1" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075587 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="092f1a66-40b6-4a06-81a7-7a102bf587cd" containerName="registry-server"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.075626 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="105b5a9b-c81b-43d5-bea0-7bfd062ed807" containerName="tempest-tests-tempest-tests-runner"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.076323 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.091702 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-x2zqx"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.097412 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.188342 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.188498 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxtpf\" (UniqueName: \"kubernetes.io/projected/9cbd1b10-8bcd-4759-8c6a-2db25e06eadd-kube-api-access-wxtpf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.291030 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxtpf\" (UniqueName: \"kubernetes.io/projected/9cbd1b10-8bcd-4759-8c6a-2db25e06eadd-kube-api-access-wxtpf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.291209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.291993 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.335478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxtpf\" (UniqueName: \"kubernetes.io/projected/9cbd1b10-8bcd-4759-8c6a-2db25e06eadd-kube-api-access-wxtpf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.349121 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 03 10:11:00 crc
kubenswrapper[4856]: I1203 10:11:00.433327 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.919734 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 10:11:00 crc kubenswrapper[4856]: I1203 10:11:00.931933 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 10:11:01 crc kubenswrapper[4856]: I1203 10:11:01.118056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd","Type":"ContainerStarted","Data":"86eccfbc8e722d53a0a5966e5bb73f2154ef4633a89a98e545af565d7f93fd8c"} Dec 03 10:11:02 crc kubenswrapper[4856]: I1203 10:11:02.138375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9cbd1b10-8bcd-4759-8c6a-2db25e06eadd","Type":"ContainerStarted","Data":"ececd46e209b72c2a5b97143f406258d225518963524bf7213e617778b62f7f1"} Dec 03 10:11:02 crc kubenswrapper[4856]: I1203 10:11:02.174159 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.234751089 podStartE2EDuration="2.174132959s" podCreationTimestamp="2025-12-03 10:11:00 +0000 UTC" firstStartedPulling="2025-12-03 10:11:00.931635031 +0000 UTC m=+3529.114527332" lastFinishedPulling="2025-12-03 10:11:01.871016901 +0000 UTC m=+3530.053909202" observedRunningTime="2025-12-03 10:11:02.169099372 +0000 UTC m=+3530.351991673" watchObservedRunningTime="2025-12-03 10:11:02.174132959 +0000 UTC m=+3530.357025260" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.205500 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t5pcp/must-gather-krrpp"] Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.208685 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.216341 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t5pcp/must-gather-krrpp"] Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.231223 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t5pcp"/"openshift-service-ca.crt" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.231415 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t5pcp"/"kube-root-ca.crt" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.231524 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t5pcp"/"default-dockercfg-9gp4g" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.346487 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnw82\" (UniqueName: \"kubernetes.io/projected/569cca77-4758-410c-bf47-6b235c903dd0-kube-api-access-xnw82\") pod \"must-gather-krrpp\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.346553 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/569cca77-4758-410c-bf47-6b235c903dd0-must-gather-output\") pod \"must-gather-krrpp\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.448862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnw82\" (UniqueName: \"kubernetes.io/projected/569cca77-4758-410c-bf47-6b235c903dd0-kube-api-access-xnw82\") pod \"must-gather-krrpp\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.448977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/569cca77-4758-410c-bf47-6b235c903dd0-must-gather-output\") pod \"must-gather-krrpp\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.449470 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/569cca77-4758-410c-bf47-6b235c903dd0-must-gather-output\") pod \"must-gather-krrpp\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.473782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnw82\" (UniqueName: \"kubernetes.io/projected/569cca77-4758-410c-bf47-6b235c903dd0-kube-api-access-xnw82\") pod \"must-gather-krrpp\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:26 crc kubenswrapper[4856]: I1203 10:11:26.552788 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:11:27 crc kubenswrapper[4856]: I1203 10:11:27.041253 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t5pcp/must-gather-krrpp"] Dec 03 10:11:27 crc kubenswrapper[4856]: I1203 10:11:27.428318 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/must-gather-krrpp" event={"ID":"569cca77-4758-410c-bf47-6b235c903dd0","Type":"ContainerStarted","Data":"f08e82cc1eec9c0f4802d78c3a3677ac9fc17bab556f84946ab7d999d0247063"} Dec 03 10:11:31 crc kubenswrapper[4856]: I1203 10:11:31.500377 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/must-gather-krrpp" event={"ID":"569cca77-4758-410c-bf47-6b235c903dd0","Type":"ContainerStarted","Data":"b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f"} Dec 03 10:11:32 crc kubenswrapper[4856]: I1203 10:11:32.512252 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/must-gather-krrpp" event={"ID":"569cca77-4758-410c-bf47-6b235c903dd0","Type":"ContainerStarted","Data":"ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852"} Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.097115 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t5pcp/must-gather-krrpp" podStartSLOduration=5.078000404 podStartE2EDuration="9.097090278s" podCreationTimestamp="2025-12-03 10:11:26 +0000 UTC" firstStartedPulling="2025-12-03 10:11:27.037922709 +0000 UTC m=+3555.220815020" lastFinishedPulling="2025-12-03 10:11:31.057012593 +0000 UTC m=+3559.239904894" observedRunningTime="2025-12-03 10:11:32.538594974 +0000 UTC m=+3560.721487285" watchObservedRunningTime="2025-12-03 10:11:35.097090278 +0000 UTC m=+3563.279982579" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.111261 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-gmfsq"] Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.112973 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.252342 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4sq\" (UniqueName: \"kubernetes.io/projected/030aa939-13bc-458d-b44e-b6ffe40d5853-kube-api-access-xn4sq\") pod \"crc-debug-gmfsq\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.252525 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030aa939-13bc-458d-b44e-b6ffe40d5853-host\") pod \"crc-debug-gmfsq\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.355323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn4sq\" (UniqueName: \"kubernetes.io/projected/030aa939-13bc-458d-b44e-b6ffe40d5853-kube-api-access-xn4sq\") pod \"crc-debug-gmfsq\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.355932 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030aa939-13bc-458d-b44e-b6ffe40d5853-host\") pod \"crc-debug-gmfsq\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.355793 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030aa939-13bc-458d-b44e-b6ffe40d5853-host\") pod \"crc-debug-gmfsq\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.383848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4sq\" (UniqueName: \"kubernetes.io/projected/030aa939-13bc-458d-b44e-b6ffe40d5853-kube-api-access-xn4sq\") pod \"crc-debug-gmfsq\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.435736 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:11:35 crc kubenswrapper[4856]: I1203 10:11:35.551285 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" event={"ID":"030aa939-13bc-458d-b44e-b6ffe40d5853","Type":"ContainerStarted","Data":"d8d3492bbfa0978b3b1e39640fd44ea8a7097c0cf080a966b4af6ef45d27cc88"} Dec 03 10:11:47 crc kubenswrapper[4856]: I1203 10:11:47.686444 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" event={"ID":"030aa939-13bc-458d-b44e-b6ffe40d5853","Type":"ContainerStarted","Data":"0d0a210afd3f92286c1de291019d5e03ab658d5657ac5ee970696e5eaeb30df0"} Dec 03 10:11:47 crc kubenswrapper[4856]: I1203 10:11:47.708545 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" podStartSLOduration=1.16138525 podStartE2EDuration="12.708517232s" podCreationTimestamp="2025-12-03 10:11:35 +0000 UTC" firstStartedPulling="2025-12-03 10:11:35.511297318 +0000 UTC m=+3563.694189619" lastFinishedPulling="2025-12-03 10:11:47.05842928 +0000 UTC m=+3575.241321601" observedRunningTime="2025-12-03 10:11:47.707939348 +0000 UTC m=+3575.890831649" watchObservedRunningTime="2025-12-03 10:11:47.708517232 +0000 UTC m=+3575.891409553" Dec 03 10:12:22 crc kubenswrapper[4856]: I1203 10:12:22.758558 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:12:22 crc kubenswrapper[4856]: I1203 10:12:22.759427 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:12:24 crc kubenswrapper[4856]: I1203 10:12:24.033445 4856 generic.go:334] "Generic (PLEG): container finished" podID="030aa939-13bc-458d-b44e-b6ffe40d5853" containerID="0d0a210afd3f92286c1de291019d5e03ab658d5657ac5ee970696e5eaeb30df0" exitCode=0 Dec 03 10:12:24 crc kubenswrapper[4856]: I1203 10:12:24.033600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" event={"ID":"030aa939-13bc-458d-b44e-b6ffe40d5853","Type":"ContainerDied","Data":"0d0a210afd3f92286c1de291019d5e03ab658d5657ac5ee970696e5eaeb30df0"} Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.171920 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.207955 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-gmfsq"] Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.218129 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-gmfsq"] Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.309518 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn4sq\" (UniqueName: \"kubernetes.io/projected/030aa939-13bc-458d-b44e-b6ffe40d5853-kube-api-access-xn4sq\") pod \"030aa939-13bc-458d-b44e-b6ffe40d5853\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.309711 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030aa939-13bc-458d-b44e-b6ffe40d5853-host\") pod \"030aa939-13bc-458d-b44e-b6ffe40d5853\" (UID: \"030aa939-13bc-458d-b44e-b6ffe40d5853\") " Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.309889 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/030aa939-13bc-458d-b44e-b6ffe40d5853-host" (OuterVolumeSpecName: "host") pod "030aa939-13bc-458d-b44e-b6ffe40d5853" (UID: "030aa939-13bc-458d-b44e-b6ffe40d5853"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.310380 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030aa939-13bc-458d-b44e-b6ffe40d5853-host\") on node \"crc\" DevicePath \"\"" Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.329001 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030aa939-13bc-458d-b44e-b6ffe40d5853-kube-api-access-xn4sq" (OuterVolumeSpecName: "kube-api-access-xn4sq") pod "030aa939-13bc-458d-b44e-b6ffe40d5853" (UID: "030aa939-13bc-458d-b44e-b6ffe40d5853"). InnerVolumeSpecName "kube-api-access-xn4sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:12:25 crc kubenswrapper[4856]: I1203 10:12:25.412619 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn4sq\" (UniqueName: \"kubernetes.io/projected/030aa939-13bc-458d-b44e-b6ffe40d5853-kube-api-access-xn4sq\") on node \"crc\" DevicePath \"\"" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.057310 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d3492bbfa0978b3b1e39640fd44ea8a7097c0cf080a966b4af6ef45d27cc88" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.057404 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-gmfsq" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.432208 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-hd622"] Dec 03 10:12:26 crc kubenswrapper[4856]: E1203 10:12:26.432613 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030aa939-13bc-458d-b44e-b6ffe40d5853" containerName="container-00" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.432626 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="030aa939-13bc-458d-b44e-b6ffe40d5853" containerName="container-00" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.432900 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="030aa939-13bc-458d-b44e-b6ffe40d5853" containerName="container-00" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.433588 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.458849 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/f29df187-979f-404a-92c4-ba5b1435e184-kube-api-access-kr7kg\") pod \"crc-debug-hd622\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.459186 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f29df187-979f-404a-92c4-ba5b1435e184-host\") pod \"crc-debug-hd622\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.561412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/f29df187-979f-404a-92c4-ba5b1435e184-kube-api-access-kr7kg\") pod \"crc-debug-hd622\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.561538 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f29df187-979f-404a-92c4-ba5b1435e184-host\") pod \"crc-debug-hd622\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.561669 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f29df187-979f-404a-92c4-ba5b1435e184-host\") pod \"crc-debug-hd622\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.581969 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/f29df187-979f-404a-92c4-ba5b1435e184-kube-api-access-kr7kg\") pod \"crc-debug-hd622\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.702683 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030aa939-13bc-458d-b44e-b6ffe40d5853" 
path="/var/lib/kubelet/pods/030aa939-13bc-458d-b44e-b6ffe40d5853/volumes" Dec 03 10:12:26 crc kubenswrapper[4856]: I1203 10:12:26.763973 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:27 crc kubenswrapper[4856]: I1203 10:12:27.068068 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-hd622" event={"ID":"f29df187-979f-404a-92c4-ba5b1435e184","Type":"ContainerStarted","Data":"71b9068c76bf375a91720afbb2133d3e2bffd200fa5c2190dbef1d283659f6d6"} Dec 03 10:12:27 crc kubenswrapper[4856]: I1203 10:12:27.068448 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-hd622" event={"ID":"f29df187-979f-404a-92c4-ba5b1435e184","Type":"ContainerStarted","Data":"af01c8b9487ad3de5dff5365f7b767a5db72ebc3a3a81e4c1ad0d4c205a0b823"} Dec 03 10:12:27 crc kubenswrapper[4856]: I1203 10:12:27.104088 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t5pcp/crc-debug-hd622" podStartSLOduration=1.103988557 podStartE2EDuration="1.103988557s" podCreationTimestamp="2025-12-03 10:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:12:27.083455879 +0000 UTC m=+3615.266348210" watchObservedRunningTime="2025-12-03 10:12:27.103988557 +0000 UTC m=+3615.286880858" Dec 03 10:12:28 crc kubenswrapper[4856]: I1203 10:12:28.085089 4856 generic.go:334] "Generic (PLEG): container finished" podID="f29df187-979f-404a-92c4-ba5b1435e184" containerID="71b9068c76bf375a91720afbb2133d3e2bffd200fa5c2190dbef1d283659f6d6" exitCode=0 Dec 03 10:12:28 crc kubenswrapper[4856]: I1203 10:12:28.085164 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-hd622" event={"ID":"f29df187-979f-404a-92c4-ba5b1435e184","Type":"ContainerDied","Data":"71b9068c76bf375a91720afbb2133d3e2bffd200fa5c2190dbef1d283659f6d6"} Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.229726 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.289913 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-hd622"] Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.300749 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-hd622"] Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.415745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f29df187-979f-404a-92c4-ba5b1435e184-host\") pod \"f29df187-979f-404a-92c4-ba5b1435e184\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.415863 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/f29df187-979f-404a-92c4-ba5b1435e184-kube-api-access-kr7kg\") pod \"f29df187-979f-404a-92c4-ba5b1435e184\" (UID: \"f29df187-979f-404a-92c4-ba5b1435e184\") " Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.416015 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f29df187-979f-404a-92c4-ba5b1435e184-host" (OuterVolumeSpecName: "host") pod "f29df187-979f-404a-92c4-ba5b1435e184" (UID: "f29df187-979f-404a-92c4-ba5b1435e184"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.416929 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f29df187-979f-404a-92c4-ba5b1435e184-host\") on node \"crc\" DevicePath \"\"" Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.425103 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29df187-979f-404a-92c4-ba5b1435e184-kube-api-access-kr7kg" (OuterVolumeSpecName: "kube-api-access-kr7kg") pod "f29df187-979f-404a-92c4-ba5b1435e184" (UID: "f29df187-979f-404a-92c4-ba5b1435e184"). InnerVolumeSpecName "kube-api-access-kr7kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:12:29 crc kubenswrapper[4856]: I1203 10:12:29.519109 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/f29df187-979f-404a-92c4-ba5b1435e184-kube-api-access-kr7kg\") on node \"crc\" DevicePath \"\"" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.118983 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af01c8b9487ad3de5dff5365f7b767a5db72ebc3a3a81e4c1ad0d4c205a0b823" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.119453 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-hd622" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.549311 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-d49t5"] Dec 03 10:12:30 crc kubenswrapper[4856]: E1203 10:12:30.550028 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29df187-979f-404a-92c4-ba5b1435e184" containerName="container-00" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.550061 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29df187-979f-404a-92c4-ba5b1435e184" containerName="container-00" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.550536 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29df187-979f-404a-92c4-ba5b1435e184" containerName="container-00" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.551755 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.645000 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-host\") pod \"crc-debug-d49t5\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.645560 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnjz\" (UniqueName: \"kubernetes.io/projected/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-kube-api-access-dbnjz\") pod \"crc-debug-d49t5\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.707893 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29df187-979f-404a-92c4-ba5b1435e184" path="/var/lib/kubelet/pods/f29df187-979f-404a-92c4-ba5b1435e184/volumes" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.747531 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnjz\" (UniqueName: \"kubernetes.io/projected/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-kube-api-access-dbnjz\") pod \"crc-debug-d49t5\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.747974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-host\") pod \"crc-debug-d49t5\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.748117 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-host\") pod \"crc-debug-d49t5\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.784547 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnjz\" (UniqueName: \"kubernetes.io/projected/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-kube-api-access-dbnjz\") pod \"crc-debug-d49t5\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " 
pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:30 crc kubenswrapper[4856]: I1203 10:12:30.881978 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:31 crc kubenswrapper[4856]: I1203 10:12:31.128650 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-d49t5" event={"ID":"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95","Type":"ContainerStarted","Data":"ed57938c4c9966294862cc31a1fc449f957f5065ec7a0995bb8df38436f41a1e"} Dec 03 10:12:32 crc kubenswrapper[4856]: I1203 10:12:32.145881 4856 generic.go:334] "Generic (PLEG): container finished" podID="ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" containerID="89276689b0d3448e11d649010670a6cd6377194adda5ff962d3dda3ff44824fc" exitCode=0 Dec 03 10:12:32 crc kubenswrapper[4856]: I1203 10:12:32.145966 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/crc-debug-d49t5" event={"ID":"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95","Type":"ContainerDied","Data":"89276689b0d3448e11d649010670a6cd6377194adda5ff962d3dda3ff44824fc"} Dec 03 10:12:32 crc kubenswrapper[4856]: I1203 10:12:32.218647 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-d49t5"] Dec 03 10:12:32 crc kubenswrapper[4856]: I1203 10:12:32.235021 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t5pcp/crc-debug-d49t5"] Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.261928 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.334949 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-host\") pod \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.335101 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-host" (OuterVolumeSpecName: "host") pod "ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" (UID: "ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.335303 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbnjz\" (UniqueName: \"kubernetes.io/projected/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-kube-api-access-dbnjz\") pod \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\" (UID: \"ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95\") " Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.335904 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-host\") on node \"crc\" DevicePath \"\"" Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.346087 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-kube-api-access-dbnjz" (OuterVolumeSpecName: "kube-api-access-dbnjz") pod "ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" (UID: "ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95"). InnerVolumeSpecName "kube-api-access-dbnjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:12:33 crc kubenswrapper[4856]: I1203 10:12:33.437716 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbnjz\" (UniqueName: \"kubernetes.io/projected/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95-kube-api-access-dbnjz\") on node \"crc\" DevicePath \"\"" Dec 03 10:12:34 crc kubenswrapper[4856]: I1203 10:12:34.165644 4856 scope.go:117] "RemoveContainer" containerID="89276689b0d3448e11d649010670a6cd6377194adda5ff962d3dda3ff44824fc" Dec 03 10:12:34 crc kubenswrapper[4856]: I1203 10:12:34.165700 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/crc-debug-d49t5" Dec 03 10:12:34 crc kubenswrapper[4856]: I1203 10:12:34.702704 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" path="/var/lib/kubelet/pods/ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95/volumes" Dec 03 10:12:48 crc kubenswrapper[4856]: I1203 10:12:48.368929 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f7df59b96-db6lp_dc7f85ba-81a1-4b35-8620-2c24b08b5101/barbican-api/0.log" Dec 03 10:12:48 crc kubenswrapper[4856]: I1203 10:12:48.471384 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f7df59b96-db6lp_dc7f85ba-81a1-4b35-8620-2c24b08b5101/barbican-api-log/0.log" Dec 03 10:12:48 crc kubenswrapper[4856]: I1203 10:12:48.769227 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-545b57f4f4-cmb44_a8e157e2-2dcf-4664-9b48-1e6186729ef0/barbican-keystone-listener/0.log" Dec 03 10:12:48 crc kubenswrapper[4856]: I1203 10:12:48.864337 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-545b57f4f4-cmb44_a8e157e2-2dcf-4664-9b48-1e6186729ef0/barbican-keystone-listener-log/0.log" Dec 03 10:12:48 crc kubenswrapper[4856]: I1203 10:12:48.999361 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68d6cb77d9-4m8kq_1a742807-921a-47f8-883b-10c4b972c350/barbican-worker-log/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.000717 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68d6cb77d9-4m8kq_1a742807-921a-47f8-883b-10c4b972c350/barbican-worker/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.166168 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bff68_1b94e685-696f-4e31-8296-a234c7767af2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.268352 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/ceilometer-central-agent/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.369830 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/proxy-httpd/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.480778 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/sg-core/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.601645 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/ceilometer-notification-agent/0.log" Dec 03 10:12:49 crc 
kubenswrapper[4856]: I1203 10:12:49.645165 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_34eb0c70-af06-4124-a1e5-fd6010205b6d/cinder-api/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.782355 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_34eb0c70-af06-4124-a1e5-fd6010205b6d/cinder-api-log/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.844092 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e3e3b449-8b8e-497a-bccc-c2aa4c81861d/cinder-scheduler/0.log" Dec 03 10:12:49 crc kubenswrapper[4856]: I1203 10:12:49.910360 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e3e3b449-8b8e-497a-bccc-c2aa4c81861d/probe/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.080015 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8srhg_1031be18-e812-46f1-9377-792b9dd841c0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.128559 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh_a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.255694 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-4htld_bcb163df-ea4f-4591-abf6-85b77b974458/init/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.515074 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-4htld_bcb163df-ea4f-4591-abf6-85b77b974458/init/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.546918 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-4htld_bcb163df-ea4f-4591-abf6-85b77b974458/dnsmasq-dns/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.667270 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2_a22816e8-f368-454d-93c3-762e0d5e88d7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.854445 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ebc0dc7-337a-46c5-ae8e-98ca475977a0/glance-log/0.log" Dec 03 10:12:50 crc kubenswrapper[4856]: I1203 10:12:50.891246 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ebc0dc7-337a-46c5-ae8e-98ca475977a0/glance-httpd/0.log" Dec 03 10:12:51 crc kubenswrapper[4856]: I1203 10:12:51.080985 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b5006ab2-d2cb-45a1-b5b4-496b36d94bf2/glance-httpd/0.log" Dec 03 10:12:51 crc kubenswrapper[4856]: I1203 10:12:51.098188 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b5006ab2-d2cb-45a1-b5b4-496b36d94bf2/glance-log/0.log" Dec 03 10:12:51 crc kubenswrapper[4856]: I1203 10:12:51.283115 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-569f95b-qhsts_7a3ced31-90f7-4932-999e-49e914166624/horizon/0.log" Dec 03 10:12:51 crc kubenswrapper[4856]: I1203 10:12:51.528510 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-569f95b-qhsts_7a3ced31-90f7-4932-999e-49e914166624/horizon-log/0.log" Dec 03 10:12:51 crc kubenswrapper[4856]: I1203 10:12:51.713262 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6_77973943-428f-4578-91c1-ed94f2616c7e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:51 crc kubenswrapper[4856]: I1203 10:12:51.819615 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2v6dt_eed4d3c5-8f3d-4fbb-8eb9-197302785490/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.097556 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-866db7fbbf-khsgj_a52f4628-4166-45bf-893f-98155011723d/keystone-api/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.119903 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29412601-p5k85_c97ba672-751d-4c49-b856-c5b4c6ead955/keystone-cron/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.347564 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_255f4336-240a-4793-88a0-a2f6da40c0b8/kube-state-metrics/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.491633 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz_484332af-13c0-4270-932a-181a6b3f879c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.759456 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.760677 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.840354 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7844d7bfd9-p972t_d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8/neutron-httpd/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.848788 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7844d7bfd9-p972t_d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8/neutron-api/0.log" Dec 03 10:12:52 crc kubenswrapper[4856]: I1203 10:12:52.892443 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49_e442bbaf-f226-4bed-a454-bbbaf90e44ff/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:53 crc kubenswrapper[4856]: I1203 10:12:53.562567 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1d29f7b-f4ed-4266-8713-a7252ca355fe/nova-api-log/0.log" Dec 03 10:12:53 crc kubenswrapper[4856]: I1203 10:12:53.614791 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_509448a9-9abb-4e44-b37f-79faeadec13e/nova-cell0-conductor-conductor/0.log" Dec 03 10:12:53 crc 
kubenswrapper[4856]: I1203 10:12:53.801340 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b/nova-cell1-conductor-conductor/0.log" Dec 03 10:12:53 crc kubenswrapper[4856]: I1203 10:12:53.844766 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1d29f7b-f4ed-4266-8713-a7252ca355fe/nova-api-api/0.log" Dec 03 10:12:53 crc kubenswrapper[4856]: I1203 10:12:53.956521 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7ec5a006-1571-475d-8f44-d12cb737563b/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 10:12:54 crc kubenswrapper[4856]: I1203 10:12:54.141154 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-w9n5f_ebee317e-98e4-499f-91e9-fefdaa0dd0e3/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:12:54 crc kubenswrapper[4856]: I1203 10:12:54.244569 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7384e37c-9204-4c80-9119-3c5454f32c80/nova-metadata-log/0.log" Dec 03 10:12:54 crc kubenswrapper[4856]: I1203 10:12:54.638384 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b253f904-482d-4e19-b899-0304f9382759/nova-scheduler-scheduler/0.log" Dec 03 10:12:54 crc kubenswrapper[4856]: I1203 10:12:54.667542 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e8b16ba2-5c26-4b5e-85ab-d99d915b68d0/mysql-bootstrap/0.log" Dec 03 10:12:54 crc kubenswrapper[4856]: I1203 10:12:54.921921 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e8b16ba2-5c26-4b5e-85ab-d99d915b68d0/mysql-bootstrap/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.053117 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e8b16ba2-5c26-4b5e-85ab-d99d915b68d0/galera/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.157234 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_643c316d-09f1-4aee-8d49-34989baaa50e/mysql-bootstrap/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.336211 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_643c316d-09f1-4aee-8d49-34989baaa50e/mysql-bootstrap/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.430444 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_643c316d-09f1-4aee-8d49-34989baaa50e/galera/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.563701 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b26ba3fd-c881-44a2-a613-17d2ee4da042/openstackclient/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.646671 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7tm2h_da1b289d-32ea-4bbb-a203-d208e0267f9b/ovn-controller/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.716748 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7384e37c-9204-4c80-9119-3c5454f32c80/nova-metadata-metadata/0.log" Dec 03 10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.880430 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wcs7x_9a73fb24-fb9d-4037-b540-fcadcd423024/openstack-network-exporter/0.log" Dec 03 
10:12:55 crc kubenswrapper[4856]: I1203 10:12:55.961477 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovsdb-server-init/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.157817 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovs-vswitchd/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.216698 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovsdb-server/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.228387 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovsdb-server-init/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.453134 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b9109180-5009-4b2b-b2ff-b56e90bf72aa/openstack-network-exporter/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.504189 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-95flr_37eb2f8b-1352-4ee3-9f78-afe97fd4ad90/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.591007 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b9109180-5009-4b2b-b2ff-b56e90bf72aa/ovn-northd/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.705130 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5596d8aa-639a-4e4f-8905-ceb3cbb622cd/openstack-network-exporter/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.726707 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5596d8aa-639a-4e4f-8905-ceb3cbb622cd/ovsdbserver-nb/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.911200 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9dcb6c56-c540-463d-a481-0de5eb693e2b/openstack-network-exporter/0.log"
Dec 03 10:12:56 crc kubenswrapper[4856]: I1203 10:12:56.940178 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9dcb6c56-c540-463d-a481-0de5eb693e2b/ovsdbserver-sb/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.174310 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-549d5987fb-kphsk_f92cb955-92c8-46d4-adbf-f8de7330cd2c/placement-api/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.231094 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8351e71a-ffb6-4596-8edb-05855ea7c503/setup-container/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.327756 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-549d5987fb-kphsk_f92cb955-92c8-46d4-adbf-f8de7330cd2c/placement-log/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.548688 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8351e71a-ffb6-4596-8edb-05855ea7c503/setup-container/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.617982 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8351e71a-ffb6-4596-8edb-05855ea7c503/rabbitmq/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.621363 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_faec2efa-e052-4325-bd97-cbd806f725fa/setup-container/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.895281 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq_161b93b3-c4c6-4f99-a419-f00bed34b046/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.903393 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_faec2efa-e052-4325-bd97-cbd806f725fa/setup-container/0.log"
Dec 03 10:12:57 crc kubenswrapper[4856]: I1203 10:12:57.964134 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_faec2efa-e052-4325-bd97-cbd806f725fa/rabbitmq/0.log"
Dec 03 10:12:58 crc kubenswrapper[4856]: I1203 10:12:58.204345 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gbhrt_c0e24ff8-9624-4934-852f-57249413e4ee/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:12:58 crc kubenswrapper[4856]: I1203 10:12:58.258635 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s_1fc94321-b8f5-471b-9114-c93f984f9ac7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:12:58 crc kubenswrapper[4856]: I1203 10:12:58.609495 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6k8w6_20a18377-44d6-4f4e-b2a4-24470b9bf24e/ssh-known-hosts-edpm-deployment/0.log"
Dec 03 10:12:58 crc kubenswrapper[4856]: I1203 10:12:58.630053 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ksn7f_2cb5d44f-36f7-4bf5-b688-ed331f254afd/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:12:58 crc kubenswrapper[4856]: I1203 10:12:58.863220 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d5fb5d859-8njp2_295f1863-c8b3-4e9a-b09f-24c393ac167c/proxy-server/0.log"
Dec 03 10:12:58 crc kubenswrapper[4856]: I1203 10:12:58.943751 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d5fb5d859-8njp2_295f1863-c8b3-4e9a-b09f-24c393ac167c/proxy-httpd/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.120364 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-frh7v_320b56b0-4905-4a31-bc37-13106b993909/swift-ring-rebalance/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.154313 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-auditor/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.212748 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-reaper/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.363932 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-replicator/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.407247 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-auditor/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.414058 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-server/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.443505 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-replicator/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.563604 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-server/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.575373 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-updater/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.675444 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-auditor/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.677779 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-expirer/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.766615 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-replicator/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.891194 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-server/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.892552 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-updater/0.log"
Dec 03 10:12:59 crc kubenswrapper[4856]: I1203 10:12:59.936555 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/rsync/0.log"
Dec 03 10:13:00 crc kubenswrapper[4856]: I1203 10:13:00.019475 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/swift-recon-cron/0.log"
Dec 03 10:13:00 crc kubenswrapper[4856]: I1203 10:13:00.174431 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm_f61fc35d-84b0-4d7c-8567-5457a1adfc58/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:13:00 crc kubenswrapper[4856]: I1203 10:13:00.268881 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_105b5a9b-c81b-43d5-bea0-7bfd062ed807/tempest-tests-tempest-tests-runner/0.log"
Dec 03 10:13:00 crc kubenswrapper[4856]: I1203 10:13:00.424421 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9cbd1b10-8bcd-4759-8c6a-2db25e06eadd/test-operator-logs-container/0.log"
Dec 03 10:13:00 crc kubenswrapper[4856]: I1203 10:13:00.511372 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp_6cdb1761-f836-42e1-a1d2-e52ccf41594b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 10:13:09 crc kubenswrapper[4856]: I1203 10:13:09.795044 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5025473d-5c66-4550-90f1-5e4988fcbd9e/memcached/0.log"
path="/var/log/pods/openstack_memcached-0_5025473d-5c66-4550-90f1-5e4988fcbd9e/memcached/0.log" Dec 03 10:13:22 crc kubenswrapper[4856]: I1203 10:13:22.758768 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:13:22 crc kubenswrapper[4856]: I1203 10:13:22.759255 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:13:22 crc kubenswrapper[4856]: I1203 10:13:22.759304 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 10:13:22 crc kubenswrapper[4856]: I1203 10:13:22.760082 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b58b7064155eef340606430eb8f8870517eb39d7cd758247003e1cf96011709"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 10:13:22 crc kubenswrapper[4856]: I1203 10:13:22.760133 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://4b58b7064155eef340606430eb8f8870517eb39d7cd758247003e1cf96011709" gracePeriod=600 Dec 03 10:13:23 crc kubenswrapper[4856]: I1203 10:13:23.718743 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="4b58b7064155eef340606430eb8f8870517eb39d7cd758247003e1cf96011709" exitCode=0 Dec 03 10:13:23 crc kubenswrapper[4856]: I1203 10:13:23.718839 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"4b58b7064155eef340606430eb8f8870517eb39d7cd758247003e1cf96011709"} Dec 03 10:13:23 crc kubenswrapper[4856]: I1203 10:13:23.719082 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a"} Dec 03 10:13:23 crc kubenswrapper[4856]: I1203 10:13:23.719107 4856 scope.go:117] "RemoveContainer" containerID="4de4afd0ed660193d2aabbf1596f25a18e7211f9da27cdddebc909226e7a1d20" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.004642 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/util/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.178660 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/util/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 
10:13:28.179744 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/pull/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.261740 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/pull/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.432453 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/util/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.440571 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/pull/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.478141 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/extract/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.645410 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4m4j5_ab444944-5290-47a9-a2ca-8c544c5350b6/kube-rbac-proxy/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.672498 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4m4j5_ab444944-5290-47a9-a2ca-8c544c5350b6/manager/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.849029 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cw2wq_445299d7-37a7-4fa0-a50c-e81643492293/kube-rbac-proxy/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.960366 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cw2wq_445299d7-37a7-4fa0-a50c-e81643492293/manager/0.log" Dec 03 10:13:28 crc kubenswrapper[4856]: I1203 10:13:28.972002 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-2j5bq_65f07af7-c89a-403d-866e-f98462398697/kube-rbac-proxy/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.112952 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-2j5bq_65f07af7-c89a-403d-866e-f98462398697/manager/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.261519 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-bmn9s_0f952368-5565-442c-8bcb-aa61130cb3c7/manager/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.284417 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-bmn9s_0f952368-5565-442c-8bcb-aa61130cb3c7/kube-rbac-proxy/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.413331 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-8qjgb_61066eb5-99e6-4ec9-9dea-3d2ecd8d456e/kube-rbac-proxy/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.453729 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-8qjgb_61066eb5-99e6-4ec9-9dea-3d2ecd8d456e/manager/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.588763 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-88k4v_4a3f5eb0-4264-4034-8c1f-4d8b53af8b21/kube-rbac-proxy/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.681643 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-88k4v_4a3f5eb0-4264-4034-8c1f-4d8b53af8b21/manager/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.761969 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-x8vm4_d3707fa3-12a0-490e-baac-1fd0ce34fbd5/kube-rbac-proxy/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.903865 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cg4vl_bb90cacb-d6f2-4e30-a694-21cccff0a5d1/kube-rbac-proxy/0.log" Dec 03 10:13:29 crc kubenswrapper[4856]: I1203 10:13:29.994465 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-x8vm4_d3707fa3-12a0-490e-baac-1fd0ce34fbd5/manager/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.045066 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cg4vl_bb90cacb-d6f2-4e30-a694-21cccff0a5d1/manager/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.187728 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-txg26_24007279-b1cb-4d5b-aca4-c55d0cd825b7/kube-rbac-proxy/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.257221 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-txg26_24007279-b1cb-4d5b-aca4-c55d0cd825b7/manager/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.360180 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-xfpwp_0de81d3b-bbf7-455a-8842-2261010f69a2/kube-rbac-proxy/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.365019 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-xfpwp_0de81d3b-bbf7-455a-8842-2261010f69a2/manager/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.544397 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-72dlj_352d270a-b735-411d-87ba-58719ee0f984/kube-rbac-proxy/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.626789 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-72dlj_352d270a-b735-411d-87ba-58719ee0f984/manager/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.757287 4856 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dhp69_e889c7da-b8e2-46bb-b700-f700e7e969bc/kube-rbac-proxy/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.786840 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dhp69_e889c7da-b8e2-46bb-b700-f700e7e969bc/manager/0.log" Dec 03 10:13:30 crc kubenswrapper[4856]: I1203 10:13:30.888089 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jg8fc_1bfd8278-80a9-41ca-a89e-400e8b62188f/kube-rbac-proxy/0.log" Dec 03 10:13:31 crc kubenswrapper[4856]: I1203 10:13:31.074558 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jg8fc_1bfd8278-80a9-41ca-a89e-400e8b62188f/manager/0.log" Dec 03 10:13:31 crc kubenswrapper[4856]: I1203 10:13:31.088015 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-dbfch_1c5cce87-5371-47df-8471-7725731c9908/kube-rbac-proxy/0.log" Dec 03 10:13:31 crc kubenswrapper[4856]: I1203 10:13:31.156313 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-dbfch_1c5cce87-5371-47df-8471-7725731c9908/manager/0.log" Dec 03 10:13:31 crc kubenswrapper[4856]: I1203 10:13:31.355262 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59_77383a17-c2e3-4f54-8296-414e707e2056/kube-rbac-proxy/0.log" Dec 03 10:13:31 crc kubenswrapper[4856]: I1203 10:13:31.407940 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59_77383a17-c2e3-4f54-8296-414e707e2056/manager/0.log" Dec 03 10:13:31 crc kubenswrapper[4856]: I1203 10:13:31.890002 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6bf68648df-qkkzg_d0f9ba14-7b89-4373-a0f6-67ceb97ffb71/operator/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.098367 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nj5zz_43d6261a-49c7-40ca-8403-1fa273ef863c/registry-server/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.220864 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-t6jdv_2513c74a-1905-4f71-bf3c-c71095d756d3/kube-rbac-proxy/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.406609 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mkl9t_a13d32b8-0032-4c2b-9985-f865d89becdc/kube-rbac-proxy/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.422366 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-t6jdv_2513c74a-1905-4f71-bf3c-c71095d756d3/manager/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.518956 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mkl9t_a13d32b8-0032-4c2b-9985-f865d89becdc/manager/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.653217 4856 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-864649db6c-vtrz8_daec5857-0ffc-4499-af65-5f9d7ef6baf9/manager/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.722143 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rm6rm_25e84c2c-bca6-438b-ad4c-f7154e1ba97a/operator/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.765232 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-2rqwt_169cc116-1edd-4af9-b992-4bdb8e912231/kube-rbac-proxy/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.879349 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-2rqwt_169cc116-1edd-4af9-b992-4bdb8e912231/manager/0.log" Dec 03 10:13:32 crc kubenswrapper[4856]: I1203 10:13:32.940354 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-kfkmm_38e8c4db-27d8-4ffa-98e8-0859bec1243c/kube-rbac-proxy/0.log" Dec 03 10:13:33 crc kubenswrapper[4856]: I1203 10:13:33.027692 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-kfkmm_38e8c4db-27d8-4ffa-98e8-0859bec1243c/manager/0.log" Dec 03 10:13:33 crc kubenswrapper[4856]: I1203 10:13:33.107749 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h5j7g_8e9597d6-d043-4377-9b86-cf94a5df8ddf/kube-rbac-proxy/0.log" Dec 03 10:13:33 crc kubenswrapper[4856]: I1203 10:13:33.167331 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h5j7g_8e9597d6-d043-4377-9b86-cf94a5df8ddf/manager/0.log" Dec 03 10:13:33 crc kubenswrapper[4856]: I1203 10:13:33.260611 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-nccbk_d82e1df1-d3d5-4f54-874f-291e3d82aac6/manager/0.log" Dec 03 10:13:33 crc kubenswrapper[4856]: I1203 10:13:33.262685 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-nccbk_d82e1df1-d3d5-4f54-874f-291e3d82aac6/kube-rbac-proxy/0.log" Dec 03 10:13:53 crc kubenswrapper[4856]: I1203 10:13:53.739442 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-85pdd_7b9e41f7-3b5b-461d-b0d9-a28daec02d37/control-plane-machine-set-operator/0.log" Dec 03 10:13:53 crc kubenswrapper[4856]: I1203 10:13:53.962142 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4k754_f3c06506-5c89-4e8b-92c2-c4886d17b6df/machine-api-operator/0.log" Dec 03 10:13:53 crc kubenswrapper[4856]: I1203 10:13:53.972142 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4k754_f3c06506-5c89-4e8b-92c2-c4886d17b6df/kube-rbac-proxy/0.log" Dec 03 10:14:08 crc kubenswrapper[4856]: I1203 10:14:08.234805 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-9rpmm_1f803911-3cdc-40bf-8849-0a94fdf62f5c/cert-manager-controller/0.log" Dec 03 10:14:08 crc kubenswrapper[4856]: I1203 
Dec 03 10:14:08 crc kubenswrapper[4856]: I1203 10:14:08.441191 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-84d4p_a0555a5a-1ddc-46ae-b98a-7e4baa736e35/cert-manager-webhook/0.log"
Dec 03 10:14:22 crc kubenswrapper[4856]: I1203 10:14:22.222983 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-2ps5x_bca24c59-d3d7-42fb-a6b5-226bece344db/nmstate-console-plugin/0.log"
Dec 03 10:14:22 crc kubenswrapper[4856]: I1203 10:14:22.460740 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gwwkk_df2e4dae-23c5-4f2a-978d-1e7293553f21/nmstate-handler/0.log"
Dec 03 10:14:22 crc kubenswrapper[4856]: I1203 10:14:22.477494 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sckh8_a981397a-976a-4fc4-8cb9-af2d72410121/kube-rbac-proxy/0.log"
Dec 03 10:14:22 crc kubenswrapper[4856]: I1203 10:14:22.538883 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sckh8_a981397a-976a-4fc4-8cb9-af2d72410121/nmstate-metrics/0.log"
Dec 03 10:14:22 crc kubenswrapper[4856]: I1203 10:14:22.636801 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-88642_05257373-572f-4663-911d-8f50b368b390/nmstate-operator/0.log"
Dec 03 10:14:22 crc kubenswrapper[4856]: I1203 10:14:22.746232 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-vw9qx_e91cc63e-1f90-4427-942e-fe3645f8ee86/nmstate-webhook/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.345763 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-hpb7s_c049cfb9-a9ea-4348-88d4-40aacaf0c01a/kube-rbac-proxy/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.407359 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-hpb7s_c049cfb9-a9ea-4348-88d4-40aacaf0c01a/controller/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.474264 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.599477 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.605424 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.657018 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.657952 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.867412 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.885446 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.885499 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log"
Dec 03 10:14:37 crc kubenswrapper[4856]: I1203 10:14:37.892144 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.055985 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.058113 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.064140 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.136850 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/controller/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.252416 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/kube-rbac-proxy/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.260562 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/frr-metrics/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.352275 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/kube-rbac-proxy-frr/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.511907 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/reloader/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.592830 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-bcpx9_65ad74a6-7267-45f3-b6c2-898a3906758b/frr-k8s-webhook-server/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.871110 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67d6765696-kvt2f_b3ac3c18-dee0-4de8-8380-93060d971722/manager/0.log"
Dec 03 10:14:38 crc kubenswrapper[4856]: I1203 10:14:38.922038 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-db87bc449-r6ztx_ae371cb4-e8b4-40ea-a590-884cf5feae1f/webhook-server/0.log"
Dec 03 10:14:39 crc kubenswrapper[4856]: I1203 10:14:39.081165 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q26bm_0142725c-0a39-4e1c-bef6-a3027f105162/kube-rbac-proxy/0.log"
Dec 03 10:14:39 crc kubenswrapper[4856]: I1203 10:14:39.639766 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q26bm_0142725c-0a39-4e1c-bef6-a3027f105162/speaker/0.log"
Dec 03 10:14:39 crc kubenswrapper[4856]: I1203 10:14:39.739697 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/frr/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.160467 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/util/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.340398 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/util/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.359345 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/pull/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.388900 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/pull/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.573722 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/pull/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.579861 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/extract/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.585954 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/util/0.log"
Dec 03 10:14:52 crc kubenswrapper[4856]: I1203 10:14:52.762736 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log"
Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.018130 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log"
Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.046221 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/pull/0.log"
Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.048087 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/pull/0.log"
Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.275740 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/extract/0.log"
Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.329419 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log"
log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.357653 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/pull/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.482663 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-utilities/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.609676 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-utilities/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.653259 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-content/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.657419 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-content/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.832585 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-utilities/0.log" Dec 03 10:14:53 crc kubenswrapper[4856]: I1203 10:14:53.847787 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-content/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.044784 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/registry-server/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.094949 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-utilities/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.234210 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-content/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.240776 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-utilities/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.243430 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-content/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.395746 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-utilities/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.440580 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-content/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 
10:14:54.700189 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7nlmw_87add44f-39e7-460b-9f01-d5aa27e44491/marketplace-operator/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.729734 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-utilities/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.888654 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/registry-server/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.928609 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-utilities/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.978937 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-content/0.log" Dec 03 10:14:54 crc kubenswrapper[4856]: I1203 10:14:54.982034 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-content/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.112087 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-utilities/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.114184 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-content/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.272313 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-utilities/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.277598 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/registry-server/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.502096 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-utilities/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.520655 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-content/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.522348 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-content/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.664640 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-utilities/0.log" Dec 03 10:14:55 crc kubenswrapper[4856]: I1203 10:14:55.679451 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-content/0.log" Dec 03 10:14:56 crc kubenswrapper[4856]: I1203 10:14:56.118844 4856 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/registry-server/0.log" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.188998 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5"] Dec 03 10:15:00 crc kubenswrapper[4856]: E1203 10:15:00.189986 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" containerName="container-00" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.190003 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" containerName="container-00" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.190275 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9f3cb0-689b-40a2-b657-e3bc4a1f3d95" containerName="container-00" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.191068 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.194017 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.199296 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.203391 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5"] Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.381162 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmkt\" (UniqueName: \"kubernetes.io/projected/f256eac8-3cf1-4d1b-aa10-3196e274afa4-kube-api-access-hhmkt\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.381224 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f256eac8-3cf1-4d1b-aa10-3196e274afa4-config-volume\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.381447 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f256eac8-3cf1-4d1b-aa10-3196e274afa4-secret-volume\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.483114 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f256eac8-3cf1-4d1b-aa10-3196e274afa4-secret-volume\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 
10:15:00.483303 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhmkt\" (UniqueName: \"kubernetes.io/projected/f256eac8-3cf1-4d1b-aa10-3196e274afa4-kube-api-access-hhmkt\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.483340 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f256eac8-3cf1-4d1b-aa10-3196e274afa4-config-volume\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.484222 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f256eac8-3cf1-4d1b-aa10-3196e274afa4-config-volume\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.489510 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f256eac8-3cf1-4d1b-aa10-3196e274afa4-secret-volume\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.505017 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhmkt\" (UniqueName: \"kubernetes.io/projected/f256eac8-3cf1-4d1b-aa10-3196e274afa4-kube-api-access-hhmkt\") pod \"collect-profiles-29412615-7q7g5\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:00 crc kubenswrapper[4856]: I1203 10:15:00.517082 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:01 crc kubenswrapper[4856]: I1203 10:15:01.025740 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5"] Dec 03 10:15:01 crc kubenswrapper[4856]: I1203 10:15:01.694297 4856 generic.go:334] "Generic (PLEG): container finished" podID="f256eac8-3cf1-4d1b-aa10-3196e274afa4" containerID="79385294299fae64015eef4120710e71320ba8bd1f9e7c75cf7ce473697541c7" exitCode=0 Dec 03 10:15:01 crc kubenswrapper[4856]: I1203 10:15:01.694347 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" event={"ID":"f256eac8-3cf1-4d1b-aa10-3196e274afa4","Type":"ContainerDied","Data":"79385294299fae64015eef4120710e71320ba8bd1f9e7c75cf7ce473697541c7"} Dec 03 10:15:01 crc kubenswrapper[4856]: I1203 10:15:01.694579 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" event={"ID":"f256eac8-3cf1-4d1b-aa10-3196e274afa4","Type":"ContainerStarted","Data":"da07dbb854640c059a444ce2ff5b90368726a25ade3af5c08db4bf8d035a469b"} Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.027548 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.140179 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f256eac8-3cf1-4d1b-aa10-3196e274afa4-config-volume\") pod \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.140690 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f256eac8-3cf1-4d1b-aa10-3196e274afa4-secret-volume\") pod \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.140793 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f256eac8-3cf1-4d1b-aa10-3196e274afa4-config-volume" (OuterVolumeSpecName: "config-volume") pod "f256eac8-3cf1-4d1b-aa10-3196e274afa4" (UID: "f256eac8-3cf1-4d1b-aa10-3196e274afa4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.140899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhmkt\" (UniqueName: \"kubernetes.io/projected/f256eac8-3cf1-4d1b-aa10-3196e274afa4-kube-api-access-hhmkt\") pod \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\" (UID: \"f256eac8-3cf1-4d1b-aa10-3196e274afa4\") " Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.141523 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f256eac8-3cf1-4d1b-aa10-3196e274afa4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.149376 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f256eac8-3cf1-4d1b-aa10-3196e274afa4-kube-api-access-hhmkt" (OuterVolumeSpecName: "kube-api-access-hhmkt") pod "f256eac8-3cf1-4d1b-aa10-3196e274afa4" (UID: "f256eac8-3cf1-4d1b-aa10-3196e274afa4"). InnerVolumeSpecName "kube-api-access-hhmkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.150891 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f256eac8-3cf1-4d1b-aa10-3196e274afa4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f256eac8-3cf1-4d1b-aa10-3196e274afa4" (UID: "f256eac8-3cf1-4d1b-aa10-3196e274afa4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.243445 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f256eac8-3cf1-4d1b-aa10-3196e274afa4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.243512 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhmkt\" (UniqueName: \"kubernetes.io/projected/f256eac8-3cf1-4d1b-aa10-3196e274afa4-kube-api-access-hhmkt\") on node \"crc\" DevicePath \"\"" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.720028 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" event={"ID":"f256eac8-3cf1-4d1b-aa10-3196e274afa4","Type":"ContainerDied","Data":"da07dbb854640c059a444ce2ff5b90368726a25ade3af5c08db4bf8d035a469b"} Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.720082 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da07dbb854640c059a444ce2ff5b90368726a25ade3af5c08db4bf8d035a469b" Dec 03 10:15:03 crc kubenswrapper[4856]: I1203 10:15:03.720154 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412615-7q7g5" Dec 03 10:15:04 crc kubenswrapper[4856]: I1203 10:15:04.127127 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8"] Dec 03 10:15:04 crc kubenswrapper[4856]: I1203 10:15:04.136246 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412570-2lcr8"] Dec 03 10:15:04 crc kubenswrapper[4856]: I1203 10:15:04.703840 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11af8d9b-d6bc-43a5-9a08-cd946ac9acac" path="/var/lib/kubelet/pods/11af8d9b-d6bc-43a5-9a08-cd946ac9acac/volumes" Dec 03 10:15:39 crc kubenswrapper[4856]: I1203 10:15:39.790491 4856 scope.go:117] "RemoveContainer" containerID="8ad3bef55691a965283b0d81790846eb98cb16e139bafcd6b48e710524595e8a" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.758499 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.759146 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.843782 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c7nsb"] Dec 03 10:15:52 crc kubenswrapper[4856]: E1203 10:15:52.844670 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f256eac8-3cf1-4d1b-aa10-3196e274afa4" containerName="collect-profiles" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.844697 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f256eac8-3cf1-4d1b-aa10-3196e274afa4" containerName="collect-profiles" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 
10:15:52.845080 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f256eac8-3cf1-4d1b-aa10-3196e274afa4" containerName="collect-profiles" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.847533 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.863283 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7nsb"] Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.988885 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-catalog-content\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.988998 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrgn\" (UniqueName: \"kubernetes.io/projected/27751d4f-48bc-4360-9a31-e45c21e97ae1-kube-api-access-xbrgn\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:52 crc kubenswrapper[4856]: I1203 10:15:52.989063 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-utilities\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.090795 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-catalog-content\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.090938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrgn\" (UniqueName: \"kubernetes.io/projected/27751d4f-48bc-4360-9a31-e45c21e97ae1-kube-api-access-xbrgn\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.090979 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-utilities\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.091432 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-catalog-content\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.091462 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-utilities\") pod 
\"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.115655 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrgn\" (UniqueName: \"kubernetes.io/projected/27751d4f-48bc-4360-9a31-e45c21e97ae1-kube-api-access-xbrgn\") pod \"redhat-operators-c7nsb\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.178068 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:15:53 crc kubenswrapper[4856]: I1203 10:15:53.646552 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7nsb"] Dec 03 10:15:54 crc kubenswrapper[4856]: I1203 10:15:54.244346 4856 generic.go:334] "Generic (PLEG): container finished" podID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerID="c4db06c28d0b69c709489e65f617424e08d5c9bf372d454b102bb2666c71e413" exitCode=0 Dec 03 10:15:54 crc kubenswrapper[4856]: I1203 10:15:54.244515 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerDied","Data":"c4db06c28d0b69c709489e65f617424e08d5c9bf372d454b102bb2666c71e413"} Dec 03 10:15:54 crc kubenswrapper[4856]: I1203 10:15:54.244680 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerStarted","Data":"cc6031ad25dbcb7e907731e0aa71879fa01f3637f9e78fad4103c8ef54e77beb"} Dec 03 10:15:55 crc kubenswrapper[4856]: I1203 10:15:55.257591 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerStarted","Data":"595d1da5ccf1c576179e674471ba1fdf459118adc6ed2a157bb5f3888232f889"} Dec 03 10:15:56 crc kubenswrapper[4856]: I1203 10:15:56.336133 4856 generic.go:334] "Generic (PLEG): container finished" podID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerID="595d1da5ccf1c576179e674471ba1fdf459118adc6ed2a157bb5f3888232f889" exitCode=0 Dec 03 10:15:56 crc kubenswrapper[4856]: I1203 10:15:56.336191 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerDied","Data":"595d1da5ccf1c576179e674471ba1fdf459118adc6ed2a157bb5f3888232f889"} Dec 03 10:15:58 crc kubenswrapper[4856]: I1203 10:15:58.357617 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerStarted","Data":"1b645224681bf77469c700f6a2cbc629de44e6bb308b648cdd1ff1f5ff0660eb"} Dec 03 10:16:03 crc kubenswrapper[4856]: I1203 10:16:03.178219 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:16:03 crc kubenswrapper[4856]: I1203 10:16:03.178868 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:16:04 crc kubenswrapper[4856]: I1203 10:16:04.225020 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c7nsb" 
podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="registry-server" probeResult="failure" output=< Dec 03 10:16:04 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Dec 03 10:16:04 crc kubenswrapper[4856]: > Dec 03 10:16:13 crc kubenswrapper[4856]: I1203 10:16:13.249581 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:16:13 crc kubenswrapper[4856]: I1203 10:16:13.296406 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c7nsb" podStartSLOduration=18.155896344 podStartE2EDuration="21.296372918s" podCreationTimestamp="2025-12-03 10:15:52 +0000 UTC" firstStartedPulling="2025-12-03 10:15:54.24647567 +0000 UTC m=+3822.429367971" lastFinishedPulling="2025-12-03 10:15:57.386952244 +0000 UTC m=+3825.569844545" observedRunningTime="2025-12-03 10:15:58.382643813 +0000 UTC m=+3826.565536134" watchObservedRunningTime="2025-12-03 10:16:13.296372918 +0000 UTC m=+3841.479265279" Dec 03 10:16:13 crc kubenswrapper[4856]: I1203 10:16:13.316710 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:16:13 crc kubenswrapper[4856]: I1203 10:16:13.507609 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c7nsb"] Dec 03 10:16:14 crc kubenswrapper[4856]: I1203 10:16:14.554526 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c7nsb" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="registry-server" containerID="cri-o://1b645224681bf77469c700f6a2cbc629de44e6bb308b648cdd1ff1f5ff0660eb" gracePeriod=2 Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.570025 4856 generic.go:334] "Generic (PLEG): container finished" podID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerID="1b645224681bf77469c700f6a2cbc629de44e6bb308b648cdd1ff1f5ff0660eb" exitCode=0 Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.570097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerDied","Data":"1b645224681bf77469c700f6a2cbc629de44e6bb308b648cdd1ff1f5ff0660eb"} Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.570397 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7nsb" event={"ID":"27751d4f-48bc-4360-9a31-e45c21e97ae1","Type":"ContainerDied","Data":"cc6031ad25dbcb7e907731e0aa71879fa01f3637f9e78fad4103c8ef54e77beb"} Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.570414 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6031ad25dbcb7e907731e0aa71879fa01f3637f9e78fad4103c8ef54e77beb" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.596712 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.731374 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-catalog-content\") pod \"27751d4f-48bc-4360-9a31-e45c21e97ae1\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.731450 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbrgn\" (UniqueName: \"kubernetes.io/projected/27751d4f-48bc-4360-9a31-e45c21e97ae1-kube-api-access-xbrgn\") pod \"27751d4f-48bc-4360-9a31-e45c21e97ae1\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.731628 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-utilities\") pod \"27751d4f-48bc-4360-9a31-e45c21e97ae1\" (UID: \"27751d4f-48bc-4360-9a31-e45c21e97ae1\") " Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.733142 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-utilities" (OuterVolumeSpecName: "utilities") pod "27751d4f-48bc-4360-9a31-e45c21e97ae1" (UID: "27751d4f-48bc-4360-9a31-e45c21e97ae1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.740462 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27751d4f-48bc-4360-9a31-e45c21e97ae1-kube-api-access-xbrgn" (OuterVolumeSpecName: "kube-api-access-xbrgn") pod "27751d4f-48bc-4360-9a31-e45c21e97ae1" (UID: "27751d4f-48bc-4360-9a31-e45c21e97ae1"). InnerVolumeSpecName "kube-api-access-xbrgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.834145 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbrgn\" (UniqueName: \"kubernetes.io/projected/27751d4f-48bc-4360-9a31-e45c21e97ae1-kube-api-access-xbrgn\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.834188 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.849119 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27751d4f-48bc-4360-9a31-e45c21e97ae1" (UID: "27751d4f-48bc-4360-9a31-e45c21e97ae1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:16:15 crc kubenswrapper[4856]: I1203 10:16:15.936598 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27751d4f-48bc-4360-9a31-e45c21e97ae1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:16 crc kubenswrapper[4856]: I1203 10:16:16.578295 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c7nsb" Dec 03 10:16:16 crc kubenswrapper[4856]: I1203 10:16:16.620156 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c7nsb"] Dec 03 10:16:16 crc kubenswrapper[4856]: I1203 10:16:16.629620 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c7nsb"] Dec 03 10:16:16 crc kubenswrapper[4856]: I1203 10:16:16.713552 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" path="/var/lib/kubelet/pods/27751d4f-48bc-4360-9a31-e45c21e97ae1/volumes" Dec 03 10:16:22 crc kubenswrapper[4856]: I1203 10:16:22.758700 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:16:22 crc kubenswrapper[4856]: I1203 10:16:22.759457 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.124632 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hf7g8"] Dec 03 10:16:30 crc kubenswrapper[4856]: E1203 10:16:30.125998 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="extract-content" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.126015 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="extract-content" Dec 03 10:16:30 crc kubenswrapper[4856]: E1203 10:16:30.126032 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="registry-server" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.126038 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="registry-server" Dec 03 10:16:30 crc kubenswrapper[4856]: E1203 10:16:30.126064 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="extract-utilities" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.126072 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="extract-utilities" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.126286 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="27751d4f-48bc-4360-9a31-e45c21e97ae1" containerName="registry-server" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.127929 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.140782 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf7g8"] Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.254067 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kplh\" (UniqueName: \"kubernetes.io/projected/1ea58e90-3dc8-4853-af64-29f584f254d6-kube-api-access-6kplh\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.254125 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-utilities\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.254216 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-catalog-content\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.356237 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-catalog-content\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.357588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-catalog-content\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.357979 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kplh\" (UniqueName: \"kubernetes.io/projected/1ea58e90-3dc8-4853-af64-29f584f254d6-kube-api-access-6kplh\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.362019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-utilities\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.362631 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-utilities\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.381676 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6kplh\" (UniqueName: \"kubernetes.io/projected/1ea58e90-3dc8-4853-af64-29f584f254d6-kube-api-access-6kplh\") pod \"certified-operators-hf7g8\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:30 crc kubenswrapper[4856]: I1203 10:16:30.509823 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:31 crc kubenswrapper[4856]: I1203 10:16:31.051607 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf7g8"] Dec 03 10:16:31 crc kubenswrapper[4856]: I1203 10:16:31.727486 4856 generic.go:334] "Generic (PLEG): container finished" podID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerID="fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31" exitCode=0 Dec 03 10:16:31 crc kubenswrapper[4856]: I1203 10:16:31.727646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerDied","Data":"fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31"} Dec 03 10:16:31 crc kubenswrapper[4856]: I1203 10:16:31.727799 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerStarted","Data":"fa4943014297ddc767e6a054b858e79b34cac3f46d8f4610c83fc2b197c4def0"} Dec 03 10:16:31 crc kubenswrapper[4856]: I1203 10:16:31.733765 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 10:16:32 crc kubenswrapper[4856]: I1203 10:16:32.739302 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerStarted","Data":"f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad"} Dec 03 10:16:33 crc kubenswrapper[4856]: I1203 10:16:33.749421 4856 generic.go:334] "Generic (PLEG): container finished" podID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerID="f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad" exitCode=0 Dec 03 10:16:33 crc kubenswrapper[4856]: I1203 10:16:33.749698 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerDied","Data":"f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad"} Dec 03 10:16:34 crc kubenswrapper[4856]: I1203 10:16:34.763597 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerStarted","Data":"76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219"} Dec 03 10:16:34 crc kubenswrapper[4856]: I1203 10:16:34.791504 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hf7g8" podStartSLOduration=2.417565039 podStartE2EDuration="4.791484567s" podCreationTimestamp="2025-12-03 10:16:30 +0000 UTC" firstStartedPulling="2025-12-03 10:16:31.731608039 +0000 UTC m=+3859.914500340" lastFinishedPulling="2025-12-03 10:16:34.105527567 +0000 UTC m=+3862.288419868" observedRunningTime="2025-12-03 10:16:34.790081752 +0000 UTC m=+3862.972974063" watchObservedRunningTime="2025-12-03 
10:16:34.791484567 +0000 UTC m=+3862.974376878" Dec 03 10:16:36 crc kubenswrapper[4856]: I1203 10:16:36.787092 4856 generic.go:334] "Generic (PLEG): container finished" podID="569cca77-4758-410c-bf47-6b235c903dd0" containerID="b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f" exitCode=0 Dec 03 10:16:36 crc kubenswrapper[4856]: I1203 10:16:36.787168 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t5pcp/must-gather-krrpp" event={"ID":"569cca77-4758-410c-bf47-6b235c903dd0","Type":"ContainerDied","Data":"b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f"} Dec 03 10:16:36 crc kubenswrapper[4856]: I1203 10:16:36.788043 4856 scope.go:117] "RemoveContainer" containerID="b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f" Dec 03 10:16:37 crc kubenswrapper[4856]: I1203 10:16:37.729011 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t5pcp_must-gather-krrpp_569cca77-4758-410c-bf47-6b235c903dd0/gather/0.log" Dec 03 10:16:40 crc kubenswrapper[4856]: I1203 10:16:40.510058 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:40 crc kubenswrapper[4856]: I1203 10:16:40.510487 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:40 crc kubenswrapper[4856]: I1203 10:16:40.556604 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:41 crc kubenswrapper[4856]: I1203 10:16:41.572126 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:41 crc kubenswrapper[4856]: I1203 10:16:41.635608 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf7g8"] Dec 03 10:16:42 crc kubenswrapper[4856]: I1203 10:16:42.847754 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hf7g8" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="registry-server" containerID="cri-o://76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219" gracePeriod=2 Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.313230 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.346866 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-utilities\") pod \"1ea58e90-3dc8-4853-af64-29f584f254d6\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.347082 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-catalog-content\") pod \"1ea58e90-3dc8-4853-af64-29f584f254d6\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.347172 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kplh\" (UniqueName: \"kubernetes.io/projected/1ea58e90-3dc8-4853-af64-29f584f254d6-kube-api-access-6kplh\") pod \"1ea58e90-3dc8-4853-af64-29f584f254d6\" (UID: \"1ea58e90-3dc8-4853-af64-29f584f254d6\") " Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.349273 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-utilities" (OuterVolumeSpecName: "utilities") pod "1ea58e90-3dc8-4853-af64-29f584f254d6" (UID: "1ea58e90-3dc8-4853-af64-29f584f254d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.355509 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea58e90-3dc8-4853-af64-29f584f254d6-kube-api-access-6kplh" (OuterVolumeSpecName: "kube-api-access-6kplh") pod "1ea58e90-3dc8-4853-af64-29f584f254d6" (UID: "1ea58e90-3dc8-4853-af64-29f584f254d6"). InnerVolumeSpecName "kube-api-access-6kplh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.407653 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ea58e90-3dc8-4853-af64-29f584f254d6" (UID: "1ea58e90-3dc8-4853-af64-29f584f254d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.449248 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.449294 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kplh\" (UniqueName: \"kubernetes.io/projected/1ea58e90-3dc8-4853-af64-29f584f254d6-kube-api-access-6kplh\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.449337 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea58e90-3dc8-4853-af64-29f584f254d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.858772 4856 generic.go:334] "Generic (PLEG): container finished" podID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerID="76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219" exitCode=0 Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.859070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerDied","Data":"76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219"} Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.859097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf7g8" event={"ID":"1ea58e90-3dc8-4853-af64-29f584f254d6","Type":"ContainerDied","Data":"fa4943014297ddc767e6a054b858e79b34cac3f46d8f4610c83fc2b197c4def0"} Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.859114 4856 scope.go:117] "RemoveContainer" containerID="76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.859224 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf7g8" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.884342 4856 scope.go:117] "RemoveContainer" containerID="f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.897125 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf7g8"] Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.908907 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hf7g8"] Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.929728 4856 scope.go:117] "RemoveContainer" containerID="fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.950211 4856 scope.go:117] "RemoveContainer" containerID="76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219" Dec 03 10:16:43 crc kubenswrapper[4856]: E1203 10:16:43.950643 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219\": container with ID starting with 76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219 not found: ID does not exist" containerID="76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.950692 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219"} err="failed to get container status \"76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219\": rpc error: code = NotFound desc = could not find container \"76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219\": container with ID starting with 76f736e1f2c20ec231a4c28f2766deeaef2a810f6f90056965b5d0ca308df219 not found: ID does not exist" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.950723 4856 scope.go:117] "RemoveContainer" containerID="f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad" Dec 03 10:16:43 crc kubenswrapper[4856]: E1203 10:16:43.951099 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad\": container with ID starting with f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad not found: ID does not exist" containerID="f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.951130 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad"} err="failed to get container status \"f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad\": rpc error: code = NotFound desc = could not find container \"f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad\": container with ID starting with f8fe7b7c2a69e5ab05f4ee0a0e4c35581f41f64228b65e4077a977d8a0f6b7ad not found: ID does not exist" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.951152 4856 scope.go:117] "RemoveContainer" containerID="fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31" Dec 03 10:16:43 crc kubenswrapper[4856]: E1203 10:16:43.951600 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31\": container with ID starting with fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31 not found: ID does not exist" containerID="fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31" Dec 03 10:16:43 crc kubenswrapper[4856]: I1203 10:16:43.951632 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31"} err="failed to get container status \"fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31\": rpc error: code = NotFound desc = could not find container \"fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31\": container with ID starting with fbd29b8b5d4fa20472576d2ef25ac6cfae181cdc04cd4e8646b54a7b3805bb31 not found: ID does not exist" Dec 03 10:16:44 crc kubenswrapper[4856]: I1203 10:16:44.703933 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" path="/var/lib/kubelet/pods/1ea58e90-3dc8-4853-af64-29f584f254d6/volumes" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.159216 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t5pcp/must-gather-krrpp"] Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.159520 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-t5pcp/must-gather-krrpp" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="copy" containerID="cri-o://ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852" gracePeriod=2 Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.166713 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t5pcp/must-gather-krrpp"] Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.591186 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t5pcp_must-gather-krrpp_569cca77-4758-410c-bf47-6b235c903dd0/copy/0.log" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.592315 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.693542 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnw82\" (UniqueName: \"kubernetes.io/projected/569cca77-4758-410c-bf47-6b235c903dd0-kube-api-access-xnw82\") pod \"569cca77-4758-410c-bf47-6b235c903dd0\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.693923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/569cca77-4758-410c-bf47-6b235c903dd0-must-gather-output\") pod \"569cca77-4758-410c-bf47-6b235c903dd0\" (UID: \"569cca77-4758-410c-bf47-6b235c903dd0\") " Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.700187 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569cca77-4758-410c-bf47-6b235c903dd0-kube-api-access-xnw82" (OuterVolumeSpecName: "kube-api-access-xnw82") pod "569cca77-4758-410c-bf47-6b235c903dd0" (UID: "569cca77-4758-410c-bf47-6b235c903dd0"). InnerVolumeSpecName "kube-api-access-xnw82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.798298 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnw82\" (UniqueName: \"kubernetes.io/projected/569cca77-4758-410c-bf47-6b235c903dd0-kube-api-access-xnw82\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.842706 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569cca77-4758-410c-bf47-6b235c903dd0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "569cca77-4758-410c-bf47-6b235c903dd0" (UID: "569cca77-4758-410c-bf47-6b235c903dd0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.881154 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t5pcp_must-gather-krrpp_569cca77-4758-410c-bf47-6b235c903dd0/copy/0.log" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.881737 4856 generic.go:334] "Generic (PLEG): container finished" podID="569cca77-4758-410c-bf47-6b235c903dd0" containerID="ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852" exitCode=143 Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.881800 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t5pcp/must-gather-krrpp" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.881852 4856 scope.go:117] "RemoveContainer" containerID="ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.900575 4856 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/569cca77-4758-410c-bf47-6b235c903dd0-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.907721 4856 scope.go:117] "RemoveContainer" containerID="b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.990669 4856 scope.go:117] "RemoveContainer" containerID="ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852" Dec 03 10:16:45 crc kubenswrapper[4856]: E1203 10:16:45.991127 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852\": container with ID starting with ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852 not found: ID does not exist" containerID="ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.991173 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852"} err="failed to get container status \"ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852\": rpc error: code = NotFound desc = could not find container \"ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852\": container with ID starting with ef17ce09e85d2fea4071077d0c630b99daf76723ff0a50ed59df88dcaed37852 not found: ID does not exist" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.991210 4856 scope.go:117] "RemoveContainer" containerID="b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f" Dec 03 10:16:45 crc 
kubenswrapper[4856]: E1203 10:16:45.991691 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f\": container with ID starting with b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f not found: ID does not exist" containerID="b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f" Dec 03 10:16:45 crc kubenswrapper[4856]: I1203 10:16:45.991722 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f"} err="failed to get container status \"b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f\": rpc error: code = NotFound desc = could not find container \"b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f\": container with ID starting with b291e6fc3dc042875025a62adf0891b789a725d909c72501e41a9d8b0a162a3f not found: ID does not exist" Dec 03 10:16:46 crc kubenswrapper[4856]: I1203 10:16:46.702068 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569cca77-4758-410c-bf47-6b235c903dd0" path="/var/lib/kubelet/pods/569cca77-4758-410c-bf47-6b235c903dd0/volumes" Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.758979 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.759668 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.759753 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.760786 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.760961 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" gracePeriod=600 Dec 03 10:16:52 crc kubenswrapper[4856]: E1203 10:16:52.907418 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" 
podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.967347 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" exitCode=0 Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.967414 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a"} Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.967478 4856 scope.go:117] "RemoveContainer" containerID="4b58b7064155eef340606430eb8f8870517eb39d7cd758247003e1cf96011709" Dec 03 10:16:52 crc kubenswrapper[4856]: I1203 10:16:52.968475 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:16:52 crc kubenswrapper[4856]: E1203 10:16:52.968830 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:17:04 crc kubenswrapper[4856]: I1203 10:17:04.691334 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:17:04 crc kubenswrapper[4856]: E1203 10:17:04.694076 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:17:17 crc kubenswrapper[4856]: I1203 10:17:17.689486 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:17:17 crc kubenswrapper[4856]: E1203 10:17:17.690429 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:17:32 crc kubenswrapper[4856]: I1203 10:17:32.702733 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:17:32 crc kubenswrapper[4856]: E1203 10:17:32.703841 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:17:47 crc kubenswrapper[4856]: I1203 
Dec 03 10:17:47 crc kubenswrapper[4856]: I1203 10:17:47.690333 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:17:47 crc kubenswrapper[4856]: E1203 10:17:47.692446 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:18:01 crc kubenswrapper[4856]: I1203 10:18:01.689158 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:18:01 crc kubenswrapper[4856]: E1203 10:18:01.690235 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:18:14 crc kubenswrapper[4856]: I1203 10:18:14.689886 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:18:14 crc kubenswrapper[4856]: E1203 10:18:14.690870 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:18:27 crc kubenswrapper[4856]: I1203 10:18:27.688751 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:18:27 crc kubenswrapper[4856]: E1203 10:18:27.689528 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:18:39 crc kubenswrapper[4856]: I1203 10:18:39.943863 4856 scope.go:117] "RemoveContainer" containerID="71b9068c76bf375a91720afbb2133d3e2bffd200fa5c2190dbef1d283659f6d6" Dec 03 10:18:39 crc kubenswrapper[4856]: I1203 10:18:39.979029 4856 scope.go:117] "RemoveContainer" containerID="0d0a210afd3f92286c1de291019d5e03ab658d5657ac5ee970696e5eaeb30df0" Dec 03 10:18:40 crc kubenswrapper[4856]: I1203 10:18:40.689401 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:18:40 crc kubenswrapper[4856]: E1203 10:18:40.689931 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:18:51 crc kubenswrapper[4856]: I1203 10:18:51.689638 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:18:51 crc kubenswrapper[4856]: E1203 10:18:51.690558 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:19:06 crc kubenswrapper[4856]: I1203 10:19:06.689543 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:19:06 crc kubenswrapper[4856]: E1203 10:19:06.690425 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:19:17 crc kubenswrapper[4856]: I1203 10:19:17.690187 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:19:17 crc kubenswrapper[4856]: E1203 10:19:17.691328 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:19:32 crc kubenswrapper[4856]: I1203 10:19:32.694511 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:19:32 crc kubenswrapper[4856]: E1203 10:19:32.695256 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:19:44 crc kubenswrapper[4856]: I1203 10:19:44.707209 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:19:44 crc kubenswrapper[4856]: E1203 10:19:44.709187 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" 
podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.024728 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fnhv2"] Dec 03 10:19:46 crc kubenswrapper[4856]: E1203 10:19:46.026314 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="gather" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.026416 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="gather" Dec 03 10:19:46 crc kubenswrapper[4856]: E1203 10:19:46.026519 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="extract-content" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.026598 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="extract-content" Dec 03 10:19:46 crc kubenswrapper[4856]: E1203 10:19:46.026684 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="registry-server" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.026762 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="registry-server" Dec 03 10:19:46 crc kubenswrapper[4856]: E1203 10:19:46.026883 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="copy" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.026966 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="copy" Dec 03 10:19:46 crc kubenswrapper[4856]: E1203 10:19:46.027082 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="extract-utilities" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.027165 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="extract-utilities" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.027610 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="gather" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.027714 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="569cca77-4758-410c-bf47-6b235c903dd0" containerName="copy" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.027910 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea58e90-3dc8-4853-af64-29f584f254d6" containerName="registry-server" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.029922 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.042126 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnhv2"] Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.151493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htmw\" (UniqueName: \"kubernetes.io/projected/6e42fde5-52bf-41ce-926a-a56426a92304-kube-api-access-6htmw\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.151615 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-catalog-content\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.151724 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-utilities\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.253235 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6htmw\" (UniqueName: \"kubernetes.io/projected/6e42fde5-52bf-41ce-926a-a56426a92304-kube-api-access-6htmw\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.253355 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-catalog-content\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.253386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-utilities\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.253947 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-utilities\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.254093 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-catalog-content\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.275983 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6htmw\" (UniqueName: \"kubernetes.io/projected/6e42fde5-52bf-41ce-926a-a56426a92304-kube-api-access-6htmw\") pod \"community-operators-fnhv2\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:46 crc kubenswrapper[4856]: I1203 10:19:46.364615 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:47 crc kubenswrapper[4856]: I1203 10:19:47.009133 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnhv2"] Dec 03 10:19:47 crc kubenswrapper[4856]: I1203 10:19:47.891251 4856 generic.go:334] "Generic (PLEG): container finished" podID="6e42fde5-52bf-41ce-926a-a56426a92304" containerID="7fb0fe690f9f11c181834d1ce219714df018d016b8de60bd52df1d6818500f73" exitCode=0 Dec 03 10:19:47 crc kubenswrapper[4856]: I1203 10:19:47.891308 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerDied","Data":"7fb0fe690f9f11c181834d1ce219714df018d016b8de60bd52df1d6818500f73"} Dec 03 10:19:47 crc kubenswrapper[4856]: I1203 10:19:47.891588 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerStarted","Data":"e7c798579c1f8e073ff7a3faaff6a4e81d00917f0af4544fc6730755cf2b155c"} Dec 03 10:19:48 crc kubenswrapper[4856]: I1203 10:19:48.980955 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerStarted","Data":"5b1a1322086649065b8fb7411efa048a6f70c60e06d2294d60825500c7b336cb"} Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.346429 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8brc7/must-gather-njr5v"] Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.348210 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.352892 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8brc7"/"kube-root-ca.crt" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.355024 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8brc7"/"default-dockercfg-6cdvk" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.360008 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8brc7"/"openshift-service-ca.crt" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.375369 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8brc7/must-gather-njr5v"] Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.388068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f50de991-1f76-49e4-bf5c-e9d32a05986c-must-gather-output\") pod \"must-gather-njr5v\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") " pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.388177 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldbk\" (UniqueName: \"kubernetes.io/projected/f50de991-1f76-49e4-bf5c-e9d32a05986c-kube-api-access-7ldbk\") pod \"must-gather-njr5v\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") " pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.490602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f50de991-1f76-49e4-bf5c-e9d32a05986c-must-gather-output\") pod \"must-gather-njr5v\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") " pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.490766 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldbk\" (UniqueName: \"kubernetes.io/projected/f50de991-1f76-49e4-bf5c-e9d32a05986c-kube-api-access-7ldbk\") pod \"must-gather-njr5v\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") " pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.491386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f50de991-1f76-49e4-bf5c-e9d32a05986c-must-gather-output\") pod \"must-gather-njr5v\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") " pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.516753 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldbk\" (UniqueName: \"kubernetes.io/projected/f50de991-1f76-49e4-bf5c-e9d32a05986c-kube-api-access-7ldbk\") pod \"must-gather-njr5v\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") " pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.669714 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8brc7/must-gather-njr5v" Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.997435 4856 generic.go:334] "Generic (PLEG): container finished" podID="6e42fde5-52bf-41ce-926a-a56426a92304" containerID="5b1a1322086649065b8fb7411efa048a6f70c60e06d2294d60825500c7b336cb" exitCode=0 Dec 03 10:19:49 crc kubenswrapper[4856]: I1203 10:19:49.997487 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerDied","Data":"5b1a1322086649065b8fb7411efa048a6f70c60e06d2294d60825500c7b336cb"} Dec 03 10:19:50 crc kubenswrapper[4856]: I1203 10:19:50.170441 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8brc7/must-gather-njr5v"] Dec 03 10:19:50 crc kubenswrapper[4856]: W1203 10:19:50.465217 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf50de991_1f76_49e4_bf5c_e9d32a05986c.slice/crio-07054f49c7a5105fc7077ae6d7c82269eb5515357f2bef8b0803aae55d2bb3e6 WatchSource:0}: Error finding container 07054f49c7a5105fc7077ae6d7c82269eb5515357f2bef8b0803aae55d2bb3e6: Status 404 returned error can't find the container with id 07054f49c7a5105fc7077ae6d7c82269eb5515357f2bef8b0803aae55d2bb3e6 Dec 03 10:19:51 crc kubenswrapper[4856]: I1203 10:19:51.032589 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/must-gather-njr5v" event={"ID":"f50de991-1f76-49e4-bf5c-e9d32a05986c","Type":"ContainerStarted","Data":"20455aeb776f4a04504eff5606b3eab9fccf045561cfd4d665edb4c56dcdf7ca"} Dec 03 10:19:51 crc kubenswrapper[4856]: I1203 10:19:51.032982 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/must-gather-njr5v" event={"ID":"f50de991-1f76-49e4-bf5c-e9d32a05986c","Type":"ContainerStarted","Data":"07054f49c7a5105fc7077ae6d7c82269eb5515357f2bef8b0803aae55d2bb3e6"} Dec 03 10:19:52 crc kubenswrapper[4856]: I1203 10:19:52.043590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/must-gather-njr5v" event={"ID":"f50de991-1f76-49e4-bf5c-e9d32a05986c","Type":"ContainerStarted","Data":"3b294895fa8859df49a2c26503767c6cf537625d2e5c4c63391eb92e0d5e1c27"} Dec 03 10:19:52 crc kubenswrapper[4856]: I1203 10:19:52.047819 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerStarted","Data":"5355bbdf30af3ec42a6d5b99760ccbf8d42f8919abf441d18456e730622892c1"} Dec 03 10:19:52 crc kubenswrapper[4856]: I1203 10:19:52.096448 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fnhv2" podStartSLOduration=4.422326757 podStartE2EDuration="7.096418836s" podCreationTimestamp="2025-12-03 10:19:45 +0000 UTC" firstStartedPulling="2025-12-03 10:19:47.896241188 +0000 UTC m=+4056.079133499" lastFinishedPulling="2025-12-03 10:19:50.570333277 +0000 UTC m=+4058.753225578" observedRunningTime="2025-12-03 10:19:52.089554756 +0000 UTC m=+4060.272447057" watchObservedRunningTime="2025-12-03 10:19:52.096418836 +0000 UTC m=+4060.279311137" Dec 03 10:19:52 crc kubenswrapper[4856]: I1203 10:19:52.098881 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8brc7/must-gather-njr5v" podStartSLOduration=3.098870477 podStartE2EDuration="3.098870477s" 
podCreationTimestamp="2025-12-03 10:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:19:52.065960012 +0000 UTC m=+4060.248852323" watchObservedRunningTime="2025-12-03 10:19:52.098870477 +0000 UTC m=+4060.281762778" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.307352 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8brc7/crc-debug-hgcmc"] Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.309515 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.449155 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-host\") pod \"crc-debug-hgcmc\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.449249 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdcgr\" (UniqueName: \"kubernetes.io/projected/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-kube-api-access-fdcgr\") pod \"crc-debug-hgcmc\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.551939 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-host\") pod \"crc-debug-hgcmc\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.552039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdcgr\" (UniqueName: \"kubernetes.io/projected/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-kube-api-access-fdcgr\") pod \"crc-debug-hgcmc\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.552463 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-host\") pod \"crc-debug-hgcmc\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.571457 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdcgr\" (UniqueName: \"kubernetes.io/projected/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-kube-api-access-fdcgr\") pod \"crc-debug-hgcmc\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:55 crc kubenswrapper[4856]: I1203 10:19:55.633182 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:19:56 crc kubenswrapper[4856]: I1203 10:19:56.085187 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" event={"ID":"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3","Type":"ContainerStarted","Data":"053f8e0e92daa403d22bf5811dfaa1901777714afb76f2b1f0ec9bd33a8b16f1"} Dec 03 10:19:56 crc kubenswrapper[4856]: I1203 10:19:56.365537 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:56 crc kubenswrapper[4856]: I1203 10:19:56.365602 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:56 crc kubenswrapper[4856]: I1203 10:19:56.415516 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:57 crc kubenswrapper[4856]: I1203 10:19:57.096060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" event={"ID":"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3","Type":"ContainerStarted","Data":"6a2aabc727bc575d48ce8ad5ed5d5b4a8557283f11807512178bd842e7cc556f"} Dec 03 10:19:57 crc kubenswrapper[4856]: I1203 10:19:57.114088 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" podStartSLOduration=2.11406458 podStartE2EDuration="2.11406458s" podCreationTimestamp="2025-12-03 10:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:19:57.111815745 +0000 UTC m=+4065.294708056" watchObservedRunningTime="2025-12-03 10:19:57.11406458 +0000 UTC m=+4065.296956881" Dec 03 10:19:57 crc kubenswrapper[4856]: I1203 10:19:57.181986 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:19:57 crc kubenswrapper[4856]: I1203 10:19:57.231421 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnhv2"] Dec 03 10:19:57 crc kubenswrapper[4856]: I1203 10:19:57.689017 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:19:57 crc kubenswrapper[4856]: E1203 10:19:57.689569 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:19:59 crc kubenswrapper[4856]: I1203 10:19:59.132282 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fnhv2" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="registry-server" containerID="cri-o://5355bbdf30af3ec42a6d5b99760ccbf8d42f8919abf441d18456e730622892c1" gracePeriod=2 Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.155992 4856 generic.go:334] "Generic (PLEG): container finished" podID="6e42fde5-52bf-41ce-926a-a56426a92304" containerID="5355bbdf30af3ec42a6d5b99760ccbf8d42f8919abf441d18456e730622892c1" exitCode=0 Dec 03 10:20:01 crc 
kubenswrapper[4856]: I1203 10:20:01.156044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerDied","Data":"5355bbdf30af3ec42a6d5b99760ccbf8d42f8919abf441d18456e730622892c1"} Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.574456 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.670785 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-utilities\") pod \"6e42fde5-52bf-41ce-926a-a56426a92304\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.671043 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6htmw\" (UniqueName: \"kubernetes.io/projected/6e42fde5-52bf-41ce-926a-a56426a92304-kube-api-access-6htmw\") pod \"6e42fde5-52bf-41ce-926a-a56426a92304\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.671243 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-catalog-content\") pod \"6e42fde5-52bf-41ce-926a-a56426a92304\" (UID: \"6e42fde5-52bf-41ce-926a-a56426a92304\") " Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.671632 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-utilities" (OuterVolumeSpecName: "utilities") pod "6e42fde5-52bf-41ce-926a-a56426a92304" (UID: "6e42fde5-52bf-41ce-926a-a56426a92304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.679097 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e42fde5-52bf-41ce-926a-a56426a92304-kube-api-access-6htmw" (OuterVolumeSpecName: "kube-api-access-6htmw") pod "6e42fde5-52bf-41ce-926a-a56426a92304" (UID: "6e42fde5-52bf-41ce-926a-a56426a92304"). InnerVolumeSpecName "kube-api-access-6htmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.722208 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e42fde5-52bf-41ce-926a-a56426a92304" (UID: "6e42fde5-52bf-41ce-926a-a56426a92304"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.773561 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.774275 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e42fde5-52bf-41ce-926a-a56426a92304-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:01 crc kubenswrapper[4856]: I1203 10:20:01.774304 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6htmw\" (UniqueName: \"kubernetes.io/projected/6e42fde5-52bf-41ce-926a-a56426a92304-kube-api-access-6htmw\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.167701 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnhv2" event={"ID":"6e42fde5-52bf-41ce-926a-a56426a92304","Type":"ContainerDied","Data":"e7c798579c1f8e073ff7a3faaff6a4e81d00917f0af4544fc6730755cf2b155c"} Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.167762 4856 scope.go:117] "RemoveContainer" containerID="5355bbdf30af3ec42a6d5b99760ccbf8d42f8919abf441d18456e730622892c1" Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.167951 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnhv2" Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.215983 4856 scope.go:117] "RemoveContainer" containerID="5b1a1322086649065b8fb7411efa048a6f70c60e06d2294d60825500c7b336cb" Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.219595 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnhv2"] Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.232653 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fnhv2"] Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.241254 4856 scope.go:117] "RemoveContainer" containerID="7fb0fe690f9f11c181834d1ce219714df018d016b8de60bd52df1d6818500f73" Dec 03 10:20:02 crc kubenswrapper[4856]: I1203 10:20:02.702793 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" path="/var/lib/kubelet/pods/6e42fde5-52bf-41ce-926a-a56426a92304/volumes" Dec 03 10:20:11 crc kubenswrapper[4856]: I1203 10:20:11.689599 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:20:11 crc kubenswrapper[4856]: E1203 10:20:11.690279 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:20:24 crc kubenswrapper[4856]: I1203 10:20:24.688678 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:20:24 crc kubenswrapper[4856]: E1203 10:20:24.690830 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:20:39 crc kubenswrapper[4856]: I1203 10:20:39.689468 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:20:39 crc kubenswrapper[4856]: E1203 10:20:39.690259 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:20:40 crc kubenswrapper[4856]: I1203 10:20:40.540826 4856 generic.go:334] "Generic (PLEG): container finished" podID="c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" containerID="6a2aabc727bc575d48ce8ad5ed5d5b4a8557283f11807512178bd842e7cc556f" exitCode=0 Dec 03 10:20:40 crc kubenswrapper[4856]: I1203 10:20:40.540863 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" event={"ID":"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3","Type":"ContainerDied","Data":"6a2aabc727bc575d48ce8ad5ed5d5b4a8557283f11807512178bd842e7cc556f"} Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.669747 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.712964 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8brc7/crc-debug-hgcmc"] Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.723878 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8brc7/crc-debug-hgcmc"] Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.782917 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-host\") pod \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.783100 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdcgr\" (UniqueName: \"kubernetes.io/projected/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-kube-api-access-fdcgr\") pod \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\" (UID: \"c197d715-d4ca-45b9-8e37-0bc7f48d5ff3\") " Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.784118 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-host" (OuterVolumeSpecName: "host") pod "c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" (UID: "c197d715-d4ca-45b9-8e37-0bc7f48d5ff3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.794051 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-kube-api-access-fdcgr" (OuterVolumeSpecName: "kube-api-access-fdcgr") pod "c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" (UID: "c197d715-d4ca-45b9-8e37-0bc7f48d5ff3"). InnerVolumeSpecName "kube-api-access-fdcgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.886168 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-host\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:41 crc kubenswrapper[4856]: I1203 10:20:41.886221 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdcgr\" (UniqueName: \"kubernetes.io/projected/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3-kube-api-access-fdcgr\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.560855 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053f8e0e92daa403d22bf5811dfaa1901777714afb76f2b1f0ec9bd33a8b16f1" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.560945 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-hgcmc" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.710700 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" path="/var/lib/kubelet/pods/c197d715-d4ca-45b9-8e37-0bc7f48d5ff3/volumes" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.974133 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8brc7/crc-debug-sz9cp"] Dec 03 10:20:42 crc kubenswrapper[4856]: E1203 10:20:42.975003 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="registry-server" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.975035 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="registry-server" Dec 03 10:20:42 crc kubenswrapper[4856]: E1203 10:20:42.975075 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="extract-content" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.975090 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="extract-content" Dec 03 10:20:42 crc kubenswrapper[4856]: E1203 10:20:42.975119 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" containerName="container-00" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.975130 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" containerName="container-00" Dec 03 10:20:42 crc kubenswrapper[4856]: E1203 10:20:42.975157 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="extract-utilities" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.975168 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="extract-utilities" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.975491 4856 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6e42fde5-52bf-41ce-926a-a56426a92304" containerName="registry-server" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.975528 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c197d715-d4ca-45b9-8e37-0bc7f48d5ff3" containerName="container-00" Dec 03 10:20:42 crc kubenswrapper[4856]: I1203 10:20:42.976555 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.111461 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e437f3d3-53fd-4b81-aba6-c2b493de4d19-host\") pod \"crc-debug-sz9cp\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.111617 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwnm\" (UniqueName: \"kubernetes.io/projected/e437f3d3-53fd-4b81-aba6-c2b493de4d19-kube-api-access-zmwnm\") pod \"crc-debug-sz9cp\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.213371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwnm\" (UniqueName: \"kubernetes.io/projected/e437f3d3-53fd-4b81-aba6-c2b493de4d19-kube-api-access-zmwnm\") pod \"crc-debug-sz9cp\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.213547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e437f3d3-53fd-4b81-aba6-c2b493de4d19-host\") pod \"crc-debug-sz9cp\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.213653 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e437f3d3-53fd-4b81-aba6-c2b493de4d19-host\") pod \"crc-debug-sz9cp\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.238541 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwnm\" (UniqueName: \"kubernetes.io/projected/e437f3d3-53fd-4b81-aba6-c2b493de4d19-kube-api-access-zmwnm\") pod \"crc-debug-sz9cp\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.299651 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.570460 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" event={"ID":"e437f3d3-53fd-4b81-aba6-c2b493de4d19","Type":"ContainerStarted","Data":"5bfc276247a07d1327c8fba9b17472955e23019a7ff5e0c392cdf812e80d1adf"} Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.570866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" event={"ID":"e437f3d3-53fd-4b81-aba6-c2b493de4d19","Type":"ContainerStarted","Data":"00b447343f6a9411c59107758aa85e37c909ab6008d879173db09dc5b661ff8e"} Dec 03 10:20:43 crc kubenswrapper[4856]: I1203 10:20:43.590228 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" podStartSLOduration=1.590208389 podStartE2EDuration="1.590208389s" podCreationTimestamp="2025-12-03 10:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:20:43.583729788 +0000 UTC m=+4111.766622109" watchObservedRunningTime="2025-12-03 10:20:43.590208389 +0000 UTC m=+4111.773100690" Dec 03 10:20:44 crc kubenswrapper[4856]: I1203 10:20:44.583217 4856 generic.go:334] "Generic (PLEG): container finished" podID="e437f3d3-53fd-4b81-aba6-c2b493de4d19" containerID="5bfc276247a07d1327c8fba9b17472955e23019a7ff5e0c392cdf812e80d1adf" exitCode=0 Dec 03 10:20:44 crc kubenswrapper[4856]: I1203 10:20:44.583289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" event={"ID":"e437f3d3-53fd-4b81-aba6-c2b493de4d19","Type":"ContainerDied","Data":"5bfc276247a07d1327c8fba9b17472955e23019a7ff5e0c392cdf812e80d1adf"} Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.702747 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.745524 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8brc7/crc-debug-sz9cp"] Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.760369 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8brc7/crc-debug-sz9cp"] Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.761495 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmwnm\" (UniqueName: \"kubernetes.io/projected/e437f3d3-53fd-4b81-aba6-c2b493de4d19-kube-api-access-zmwnm\") pod \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.761536 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e437f3d3-53fd-4b81-aba6-c2b493de4d19-host\") pod \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\" (UID: \"e437f3d3-53fd-4b81-aba6-c2b493de4d19\") " Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.761644 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e437f3d3-53fd-4b81-aba6-c2b493de4d19-host" (OuterVolumeSpecName: "host") pod "e437f3d3-53fd-4b81-aba6-c2b493de4d19" (UID: "e437f3d3-53fd-4b81-aba6-c2b493de4d19"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.762197 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e437f3d3-53fd-4b81-aba6-c2b493de4d19-host\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.769165 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e437f3d3-53fd-4b81-aba6-c2b493de4d19-kube-api-access-zmwnm" (OuterVolumeSpecName: "kube-api-access-zmwnm") pod "e437f3d3-53fd-4b81-aba6-c2b493de4d19" (UID: "e437f3d3-53fd-4b81-aba6-c2b493de4d19"). InnerVolumeSpecName "kube-api-access-zmwnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:20:45 crc kubenswrapper[4856]: I1203 10:20:45.863829 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmwnm\" (UniqueName: \"kubernetes.io/projected/e437f3d3-53fd-4b81-aba6-c2b493de4d19-kube-api-access-zmwnm\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.372182 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ltvzn"] Dec 03 10:20:46 crc kubenswrapper[4856]: E1203 10:20:46.375273 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e437f3d3-53fd-4b81-aba6-c2b493de4d19" containerName="container-00" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.375328 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e437f3d3-53fd-4b81-aba6-c2b493de4d19" containerName="container-00" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.376116 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e437f3d3-53fd-4b81-aba6-c2b493de4d19" containerName="container-00" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.379599 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.393384 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltvzn"] Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.477053 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2k6q\" (UniqueName: \"kubernetes.io/projected/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-kube-api-access-q2k6q\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.477572 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-utilities\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.477619 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-catalog-content\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.579602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2k6q\" (UniqueName: \"kubernetes.io/projected/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-kube-api-access-q2k6q\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.579970 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-utilities\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.580066 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-catalog-content\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.580981 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-utilities\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.581041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-catalog-content\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.609030 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q2k6q\" (UniqueName: \"kubernetes.io/projected/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-kube-api-access-q2k6q\") pod \"redhat-marketplace-ltvzn\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.627102 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00b447343f6a9411c59107758aa85e37c909ab6008d879173db09dc5b661ff8e" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.627368 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-sz9cp" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.700070 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e437f3d3-53fd-4b81-aba6-c2b493de4d19" path="/var/lib/kubelet/pods/e437f3d3-53fd-4b81-aba6-c2b493de4d19/volumes" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.703890 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:46 crc kubenswrapper[4856]: I1203 10:20:46.994862 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8brc7/crc-debug-wds54"] Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:46.998914 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.088440 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-host\") pod \"crc-debug-wds54\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.088487 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cctj4\" (UniqueName: \"kubernetes.io/projected/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-kube-api-access-cctj4\") pod \"crc-debug-wds54\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.187941 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltvzn"] Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.191325 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-host\") pod \"crc-debug-wds54\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.191366 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cctj4\" (UniqueName: \"kubernetes.io/projected/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-kube-api-access-cctj4\") pod \"crc-debug-wds54\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.191993 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-host\") pod \"crc-debug-wds54\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " 
pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.212331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cctj4\" (UniqueName: \"kubernetes.io/projected/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-kube-api-access-cctj4\") pod \"crc-debug-wds54\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.329441 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.637570 4856 generic.go:334] "Generic (PLEG): container finished" podID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerID="27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c" exitCode=0 Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.637642 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltvzn" event={"ID":"1519fc85-e08e-46d2-8937-1b1b0fe0d4da","Type":"ContainerDied","Data":"27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c"} Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.637946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltvzn" event={"ID":"1519fc85-e08e-46d2-8937-1b1b0fe0d4da","Type":"ContainerStarted","Data":"1f874794b528a6fc806b330a140f97494dfab2c8f7bf6328b41c40deeb7b8939"} Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.639665 4856 generic.go:334] "Generic (PLEG): container finished" podID="1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" containerID="d3e8b533096d3a87230801de085506be0f98a53aa7d02291827b4d48299a8e9a" exitCode=0 Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.639729 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-wds54" event={"ID":"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c","Type":"ContainerDied","Data":"d3e8b533096d3a87230801de085506be0f98a53aa7d02291827b4d48299a8e9a"} Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.639772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/crc-debug-wds54" event={"ID":"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c","Type":"ContainerStarted","Data":"cf53248eeb93a50f8a510ba7555faa7f133b37257316028043340e4841bfc360"} Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.722350 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8brc7/crc-debug-wds54"] Dec 03 10:20:47 crc kubenswrapper[4856]: I1203 10:20:47.730094 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8brc7/crc-debug-wds54"] Dec 03 10:20:48 crc kubenswrapper[4856]: I1203 10:20:48.814253 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:48 crc kubenswrapper[4856]: I1203 10:20:48.926516 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cctj4\" (UniqueName: \"kubernetes.io/projected/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-kube-api-access-cctj4\") pod \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " Dec 03 10:20:48 crc kubenswrapper[4856]: I1203 10:20:48.926584 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-host\") pod \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\" (UID: \"1575d1aa-4248-4dad-a58b-c1c5d85b6a8c\") " Dec 03 10:20:48 crc kubenswrapper[4856]: I1203 10:20:48.927234 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-host" (OuterVolumeSpecName: "host") pod "1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" (UID: "1575d1aa-4248-4dad-a58b-c1c5d85b6a8c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:20:48 crc kubenswrapper[4856]: I1203 10:20:48.932781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-kube-api-access-cctj4" (OuterVolumeSpecName: "kube-api-access-cctj4") pod "1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" (UID: "1575d1aa-4248-4dad-a58b-c1c5d85b6a8c"). InnerVolumeSpecName "kube-api-access-cctj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:20:49 crc kubenswrapper[4856]: I1203 10:20:49.028682 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cctj4\" (UniqueName: \"kubernetes.io/projected/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-kube-api-access-cctj4\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:49 crc kubenswrapper[4856]: I1203 10:20:49.028714 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c-host\") on node \"crc\" DevicePath \"\"" Dec 03 10:20:49 crc kubenswrapper[4856]: I1203 10:20:49.661539 4856 scope.go:117] "RemoveContainer" containerID="d3e8b533096d3a87230801de085506be0f98a53aa7d02291827b4d48299a8e9a" Dec 03 10:20:49 crc kubenswrapper[4856]: I1203 10:20:49.661562 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8brc7/crc-debug-wds54" Dec 03 10:20:49 crc kubenswrapper[4856]: I1203 10:20:49.663883 4856 generic.go:334] "Generic (PLEG): container finished" podID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerID="81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff" exitCode=0 Dec 03 10:20:49 crc kubenswrapper[4856]: I1203 10:20:49.663920 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltvzn" event={"ID":"1519fc85-e08e-46d2-8937-1b1b0fe0d4da","Type":"ContainerDied","Data":"81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff"} Dec 03 10:20:50 crc kubenswrapper[4856]: I1203 10:20:50.674325 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltvzn" event={"ID":"1519fc85-e08e-46d2-8937-1b1b0fe0d4da","Type":"ContainerStarted","Data":"99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1"} Dec 03 10:20:50 crc kubenswrapper[4856]: I1203 10:20:50.701207 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" path="/var/lib/kubelet/pods/1575d1aa-4248-4dad-a58b-c1c5d85b6a8c/volumes" Dec 03 10:20:50 crc kubenswrapper[4856]: I1203 10:20:50.701208 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ltvzn" podStartSLOduration=2.201954734 podStartE2EDuration="4.701187424s" podCreationTimestamp="2025-12-03 10:20:46 +0000 UTC" firstStartedPulling="2025-12-03 10:20:47.640160807 +0000 UTC m=+4115.823053148" lastFinishedPulling="2025-12-03 10:20:50.139393537 +0000 UTC m=+4118.322285838" observedRunningTime="2025-12-03 10:20:50.694113649 +0000 UTC m=+4118.877005960" watchObservedRunningTime="2025-12-03 10:20:50.701187424 +0000 UTC m=+4118.884079725" Dec 03 10:20:52 crc kubenswrapper[4856]: I1203 10:20:52.696994 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:20:52 crc kubenswrapper[4856]: E1203 10:20:52.697456 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:20:56 crc kubenswrapper[4856]: I1203 10:20:56.704377 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:56 crc kubenswrapper[4856]: I1203 10:20:56.704856 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:56 crc kubenswrapper[4856]: I1203 10:20:56.759821 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:56 crc kubenswrapper[4856]: I1203 10:20:56.813610 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:20:59 crc kubenswrapper[4856]: I1203 10:20:59.348701 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltvzn"] Dec 03 10:20:59 crc kubenswrapper[4856]: I1203 10:20:59.349317 4856 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ltvzn" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="registry-server" containerID="cri-o://99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1" gracePeriod=2 Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.356751 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.413411 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-utilities\") pod \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.413689 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-catalog-content\") pod \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.413775 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2k6q\" (UniqueName: \"kubernetes.io/projected/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-kube-api-access-q2k6q\") pod \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\" (UID: \"1519fc85-e08e-46d2-8937-1b1b0fe0d4da\") " Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.414930 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-utilities" (OuterVolumeSpecName: "utilities") pod "1519fc85-e08e-46d2-8937-1b1b0fe0d4da" (UID: "1519fc85-e08e-46d2-8937-1b1b0fe0d4da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.418985 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-kube-api-access-q2k6q" (OuterVolumeSpecName: "kube-api-access-q2k6q") pod "1519fc85-e08e-46d2-8937-1b1b0fe0d4da" (UID: "1519fc85-e08e-46d2-8937-1b1b0fe0d4da"). InnerVolumeSpecName "kube-api-access-q2k6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.437689 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1519fc85-e08e-46d2-8937-1b1b0fe0d4da" (UID: "1519fc85-e08e-46d2-8937-1b1b0fe0d4da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.516068 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.516104 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2k6q\" (UniqueName: \"kubernetes.io/projected/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-kube-api-access-q2k6q\") on node \"crc\" DevicePath \"\"" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.516120 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1519fc85-e08e-46d2-8937-1b1b0fe0d4da-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.762555 4856 generic.go:334] "Generic (PLEG): container finished" podID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerID="99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1" exitCode=0 Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.762612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltvzn" event={"ID":"1519fc85-e08e-46d2-8937-1b1b0fe0d4da","Type":"ContainerDied","Data":"99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1"} Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.762647 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltvzn" event={"ID":"1519fc85-e08e-46d2-8937-1b1b0fe0d4da","Type":"ContainerDied","Data":"1f874794b528a6fc806b330a140f97494dfab2c8f7bf6328b41c40deeb7b8939"} Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.762672 4856 scope.go:117] "RemoveContainer" containerID="99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.762873 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltvzn" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.787053 4856 scope.go:117] "RemoveContainer" containerID="81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.812316 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltvzn"] Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.815704 4856 scope.go:117] "RemoveContainer" containerID="27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.824699 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltvzn"] Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.869074 4856 scope.go:117] "RemoveContainer" containerID="99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1" Dec 03 10:21:00 crc kubenswrapper[4856]: E1203 10:21:00.870195 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1\": container with ID starting with 99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1 not found: ID does not exist" containerID="99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.870252 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1"} err="failed to get container status \"99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1\": rpc error: code = NotFound desc = could not find container \"99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1\": container with ID starting with 99aab502920addfb941c2cfc1aa85e3a6637091a86b682484336f877c590bcd1 not found: ID does not exist" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.870288 4856 scope.go:117] "RemoveContainer" containerID="81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff" Dec 03 10:21:00 crc kubenswrapper[4856]: E1203 10:21:00.870654 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff\": container with ID starting with 81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff not found: ID does not exist" containerID="81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.870710 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff"} err="failed to get container status \"81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff\": rpc error: code = NotFound desc = could not find container \"81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff\": container with ID starting with 81644650fed8df8bd88ce1ee3ebbb135b2a5d73f894b36e558dd22138a9452ff not found: ID does not exist" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.870750 4856 scope.go:117] "RemoveContainer" containerID="27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c" Dec 03 10:21:00 crc kubenswrapper[4856]: E1203 10:21:00.871061 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c\": container with ID starting with 27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c not found: ID does not exist" containerID="27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c" Dec 03 10:21:00 crc kubenswrapper[4856]: I1203 10:21:00.871094 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c"} err="failed to get container status \"27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c\": rpc error: code = NotFound desc = could not find container \"27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c\": container with ID starting with 27781c4ec96e4874363ce6508d4e9cf61815c6da74228c6cccc371c3668d3d6c not found: ID does not exist" Dec 03 10:21:02 crc kubenswrapper[4856]: I1203 10:21:02.700476 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" path="/var/lib/kubelet/pods/1519fc85-e08e-46d2-8937-1b1b0fe0d4da/volumes" Dec 03 10:21:04 crc kubenswrapper[4856]: I1203 10:21:04.730587 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f7df59b96-db6lp_dc7f85ba-81a1-4b35-8620-2c24b08b5101/barbican-api/0.log" Dec 03 10:21:04 crc kubenswrapper[4856]: I1203 10:21:04.890330 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f7df59b96-db6lp_dc7f85ba-81a1-4b35-8620-2c24b08b5101/barbican-api-log/0.log" Dec 03 10:21:04 crc kubenswrapper[4856]: I1203 10:21:04.958201 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-545b57f4f4-cmb44_a8e157e2-2dcf-4664-9b48-1e6186729ef0/barbican-keystone-listener/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.034608 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-545b57f4f4-cmb44_a8e157e2-2dcf-4664-9b48-1e6186729ef0/barbican-keystone-listener-log/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.137161 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68d6cb77d9-4m8kq_1a742807-921a-47f8-883b-10c4b972c350/barbican-worker/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.174341 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68d6cb77d9-4m8kq_1a742807-921a-47f8-883b-10c4b972c350/barbican-worker-log/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.353486 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bff68_1b94e685-696f-4e31-8296-a234c7767af2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.425149 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/ceilometer-central-agent/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.495149 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/ceilometer-notification-agent/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.546538 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/proxy-httpd/0.log" Dec 03 
10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.618871 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bf8c7439-4ac0-4c40-8d21-7804cb6010df/sg-core/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.743893 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_34eb0c70-af06-4124-a1e5-fd6010205b6d/cinder-api/0.log" Dec 03 10:21:05 crc kubenswrapper[4856]: I1203 10:21:05.923897 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_34eb0c70-af06-4124-a1e5-fd6010205b6d/cinder-api-log/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.056291 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e3e3b449-8b8e-497a-bccc-c2aa4c81861d/cinder-scheduler/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.111301 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e3e3b449-8b8e-497a-bccc-c2aa4c81861d/probe/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.250501 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8srhg_1031be18-e812-46f1-9377-792b9dd841c0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.314734 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-hxnkh_a3c821c1-3ae0-4368-9ca3-703f7dfcfb9c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.465542 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-4htld_bcb163df-ea4f-4591-abf6-85b77b974458/init/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.688929 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-4htld_bcb163df-ea4f-4591-abf6-85b77b974458/dnsmasq-dns/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.690042 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:21:06 crc kubenswrapper[4856]: E1203 10:21:06.690352 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.693721 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-4htld_bcb163df-ea4f-4591-abf6-85b77b974458/init/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.728618 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vhfp2_a22816e8-f368-454d-93c3-762e0d5e88d7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.909835 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ebc0dc7-337a-46c5-ae8e-98ca475977a0/glance-httpd/0.log" Dec 03 10:21:06 crc kubenswrapper[4856]: I1203 10:21:06.941610 4856 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_glance-default-external-api-0_2ebc0dc7-337a-46c5-ae8e-98ca475977a0/glance-log/0.log" Dec 03 10:21:07 crc kubenswrapper[4856]: I1203 10:21:07.122342 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b5006ab2-d2cb-45a1-b5b4-496b36d94bf2/glance-log/0.log" Dec 03 10:21:07 crc kubenswrapper[4856]: I1203 10:21:07.141457 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b5006ab2-d2cb-45a1-b5b4-496b36d94bf2/glance-httpd/0.log" Dec 03 10:21:07 crc kubenswrapper[4856]: I1203 10:21:07.277362 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-569f95b-qhsts_7a3ced31-90f7-4932-999e-49e914166624/horizon/0.log" Dec 03 10:21:07 crc kubenswrapper[4856]: I1203 10:21:07.487595 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zwjq6_77973943-428f-4578-91c1-ed94f2616c7e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:07 crc kubenswrapper[4856]: I1203 10:21:07.670189 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2v6dt_eed4d3c5-8f3d-4fbb-8eb9-197302785490/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:07 crc kubenswrapper[4856]: I1203 10:21:07.728769 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-569f95b-qhsts_7a3ced31-90f7-4932-999e-49e914166624/horizon-log/0.log" Dec 03 10:21:08 crc kubenswrapper[4856]: I1203 10:21:08.267344 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29412601-p5k85_c97ba672-751d-4c49-b856-c5b4c6ead955/keystone-cron/0.log" Dec 03 10:21:08 crc kubenswrapper[4856]: I1203 10:21:08.421447 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-866db7fbbf-khsgj_a52f4628-4166-45bf-893f-98155011723d/keystone-api/0.log" Dec 03 10:21:08 crc kubenswrapper[4856]: I1203 10:21:08.483417 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_255f4336-240a-4793-88a0-a2f6da40c0b8/kube-state-metrics/0.log" Dec 03 10:21:08 crc kubenswrapper[4856]: I1203 10:21:08.666478 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4vvwz_484332af-13c0-4270-932a-181a6b3f879c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:09 crc kubenswrapper[4856]: I1203 10:21:09.002274 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7844d7bfd9-p972t_d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8/neutron-httpd/0.log" Dec 03 10:21:09 crc kubenswrapper[4856]: I1203 10:21:09.025700 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7844d7bfd9-p972t_d70d92ee-4e36-44a0-b6af-db4a7aa7d9f8/neutron-api/0.log" Dec 03 10:21:09 crc kubenswrapper[4856]: I1203 10:21:09.112285 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kfh49_e442bbaf-f226-4bed-a454-bbbaf90e44ff/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:09 crc kubenswrapper[4856]: I1203 10:21:09.644567 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1d29f7b-f4ed-4266-8713-a7252ca355fe/nova-api-log/0.log" Dec 03 10:21:09 crc kubenswrapper[4856]: I1203 10:21:09.735022 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_509448a9-9abb-4e44-b37f-79faeadec13e/nova-cell0-conductor-conductor/0.log" Dec 03 10:21:10 crc kubenswrapper[4856]: I1203 10:21:10.020678 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1d29f7b-f4ed-4266-8713-a7252ca355fe/nova-api-api/0.log" Dec 03 10:21:10 crc kubenswrapper[4856]: I1203 10:21:10.305763 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8b03be8b-cf3f-4194-9ebb-1bd5d2a91e6b/nova-cell1-conductor-conductor/0.log" Dec 03 10:21:10 crc kubenswrapper[4856]: I1203 10:21:10.456376 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7ec5a006-1571-475d-8f44-d12cb737563b/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 10:21:10 crc kubenswrapper[4856]: I1203 10:21:10.493413 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-w9n5f_ebee317e-98e4-499f-91e9-fefdaa0dd0e3/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:10 crc kubenswrapper[4856]: I1203 10:21:10.675329 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7384e37c-9204-4c80-9119-3c5454f32c80/nova-metadata-log/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.086209 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e8b16ba2-5c26-4b5e-85ab-d99d915b68d0/mysql-bootstrap/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.115073 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b253f904-482d-4e19-b899-0304f9382759/nova-scheduler-scheduler/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.228487 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e8b16ba2-5c26-4b5e-85ab-d99d915b68d0/mysql-bootstrap/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.313418 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e8b16ba2-5c26-4b5e-85ab-d99d915b68d0/galera/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.490695 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_643c316d-09f1-4aee-8d49-34989baaa50e/mysql-bootstrap/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.678860 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_643c316d-09f1-4aee-8d49-34989baaa50e/mysql-bootstrap/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.688457 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_643c316d-09f1-4aee-8d49-34989baaa50e/galera/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.834276 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b26ba3fd-c881-44a2-a613-17d2ee4da042/openstackclient/0.log" Dec 03 10:21:11 crc kubenswrapper[4856]: I1203 10:21:11.915260 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7tm2h_da1b289d-32ea-4bbb-a203-d208e0267f9b/ovn-controller/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.147371 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wcs7x_9a73fb24-fb9d-4037-b540-fcadcd423024/openstack-network-exporter/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.173153 4856 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7384e37c-9204-4c80-9119-3c5454f32c80/nova-metadata-metadata/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.286268 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovsdb-server-init/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.546150 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovsdb-server/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.546762 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovsdb-server-init/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.611114 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g4lq4_5816b942-fa92-48fe-a44a-279e02ae7c91/ovs-vswitchd/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.793768 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-95flr_37eb2f8b-1352-4ee3-9f78-afe97fd4ad90/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.820706 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b9109180-5009-4b2b-b2ff-b56e90bf72aa/ovn-northd/0.log" Dec 03 10:21:12 crc kubenswrapper[4856]: I1203 10:21:12.857169 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b9109180-5009-4b2b-b2ff-b56e90bf72aa/openstack-network-exporter/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.055489 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5596d8aa-639a-4e4f-8905-ceb3cbb622cd/openstack-network-exporter/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.068401 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5596d8aa-639a-4e4f-8905-ceb3cbb622cd/ovsdbserver-nb/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.262567 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9dcb6c56-c540-463d-a481-0de5eb693e2b/openstack-network-exporter/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.364083 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9dcb6c56-c540-463d-a481-0de5eb693e2b/ovsdbserver-sb/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.456615 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-549d5987fb-kphsk_f92cb955-92c8-46d4-adbf-f8de7330cd2c/placement-api/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.570229 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-549d5987fb-kphsk_f92cb955-92c8-46d4-adbf-f8de7330cd2c/placement-log/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.625458 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8351e71a-ffb6-4596-8edb-05855ea7c503/setup-container/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.890909 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_faec2efa-e052-4325-bd97-cbd806f725fa/setup-container/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.891671 4856 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8351e71a-ffb6-4596-8edb-05855ea7c503/setup-container/0.log" Dec 03 10:21:13 crc kubenswrapper[4856]: I1203 10:21:13.900982 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8351e71a-ffb6-4596-8edb-05855ea7c503/rabbitmq/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.100229 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_faec2efa-e052-4325-bd97-cbd806f725fa/setup-container/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.164451 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jwvsq_161b93b3-c4c6-4f99-a419-f00bed34b046/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.171929 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_faec2efa-e052-4325-bd97-cbd806f725fa/rabbitmq/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.414779 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gbhrt_c0e24ff8-9624-4934-852f-57249413e4ee/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.459422 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lmz6s_1fc94321-b8f5-471b-9114-c93f984f9ac7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.647616 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ksn7f_2cb5d44f-36f7-4bf5-b688-ed331f254afd/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:14 crc kubenswrapper[4856]: I1203 10:21:14.700019 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6k8w6_20a18377-44d6-4f4e-b2a4-24470b9bf24e/ssh-known-hosts-edpm-deployment/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.025618 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d5fb5d859-8njp2_295f1863-c8b3-4e9a-b09f-24c393ac167c/proxy-server/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.030370 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d5fb5d859-8njp2_295f1863-c8b3-4e9a-b09f-24c393ac167c/proxy-httpd/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.245604 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-frh7v_320b56b0-4905-4a31-bc37-13106b993909/swift-ring-rebalance/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.265512 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-auditor/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.532660 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-reaper/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.698598 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-replicator/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.765122 4856 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-auditor/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.786872 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/account-server/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.853375 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-replicator/0.log" Dec 03 10:21:15 crc kubenswrapper[4856]: I1203 10:21:15.940715 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-server/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.004762 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/container-updater/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.043559 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-auditor/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.073899 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-expirer/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.180176 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-replicator/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.240326 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-server/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.267116 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/object-updater/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.278208 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/rsync/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.416249 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9a29fb43-ed6d-499a-a4f7-b847de3dbf71/swift-recon-cron/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.553675 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lp5nm_f61fc35d-84b0-4d7c-8567-5457a1adfc58/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.688475 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_105b5a9b-c81b-43d5-bea0-7bfd062ed807/tempest-tests-tempest-tests-runner/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.823751 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9cbd1b10-8bcd-4759-8c6a-2db25e06eadd/test-operator-logs-container/0.log" Dec 03 10:21:16 crc kubenswrapper[4856]: I1203 10:21:16.927512 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bdzlp_6cdb1761-f836-42e1-a1d2-e52ccf41594b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 10:21:19 crc 
kubenswrapper[4856]: I1203 10:21:19.692086 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:21:19 crc kubenswrapper[4856]: E1203 10:21:19.692643 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:21:27 crc kubenswrapper[4856]: I1203 10:21:27.161632 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5025473d-5c66-4550-90f1-5e4988fcbd9e/memcached/0.log" Dec 03 10:21:30 crc kubenswrapper[4856]: I1203 10:21:30.689143 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:21:30 crc kubenswrapper[4856]: E1203 10:21:30.689672 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:21:41 crc kubenswrapper[4856]: I1203 10:21:41.689116 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:21:41 crc kubenswrapper[4856]: E1203 10:21:41.690626 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.340718 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/util/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.477040 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/util/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.556912 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/pull/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.583207 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/pull/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.754253 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/util/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 
10:21:46.778311 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/extract/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.791315 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_464a6078898499d62f261e598549b500e9e20d1f15505973d1dacb2d02dmz7d_01d8caec-af0a-4a43-97d4-b790eb73850c/pull/0.log" Dec 03 10:21:46 crc kubenswrapper[4856]: I1203 10:21:46.941427 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4m4j5_ab444944-5290-47a9-a2ca-8c544c5350b6/kube-rbac-proxy/0.log" Dec 03 10:21:47 crc kubenswrapper[4856]: I1203 10:21:47.105184 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4m4j5_ab444944-5290-47a9-a2ca-8c544c5350b6/manager/0.log" Dec 03 10:21:47 crc kubenswrapper[4856]: I1203 10:21:47.110000 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cw2wq_445299d7-37a7-4fa0-a50c-e81643492293/kube-rbac-proxy/0.log" Dec 03 10:21:47 crc kubenswrapper[4856]: I1203 10:21:47.241674 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-cw2wq_445299d7-37a7-4fa0-a50c-e81643492293/manager/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.048643 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-2j5bq_65f07af7-c89a-403d-866e-f98462398697/manager/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.061362 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-2j5bq_65f07af7-c89a-403d-866e-f98462398697/kube-rbac-proxy/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.291840 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-bmn9s_0f952368-5565-442c-8bcb-aa61130cb3c7/kube-rbac-proxy/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.363311 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-bmn9s_0f952368-5565-442c-8bcb-aa61130cb3c7/manager/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.427374 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-8qjgb_61066eb5-99e6-4ec9-9dea-3d2ecd8d456e/kube-rbac-proxy/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.491994 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-8qjgb_61066eb5-99e6-4ec9-9dea-3d2ecd8d456e/manager/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.599224 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-88k4v_4a3f5eb0-4264-4034-8c1f-4d8b53af8b21/kube-rbac-proxy/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.638144 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-88k4v_4a3f5eb0-4264-4034-8c1f-4d8b53af8b21/manager/0.log" Dec 03 
10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.837311 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-x8vm4_d3707fa3-12a0-490e-baac-1fd0ce34fbd5/kube-rbac-proxy/0.log" Dec 03 10:21:48 crc kubenswrapper[4856]: I1203 10:21:48.994030 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-x8vm4_d3707fa3-12a0-490e-baac-1fd0ce34fbd5/manager/0.log" Dec 03 10:21:49 crc kubenswrapper[4856]: I1203 10:21:49.010550 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cg4vl_bb90cacb-d6f2-4e30-a694-21cccff0a5d1/kube-rbac-proxy/0.log" Dec 03 10:21:49 crc kubenswrapper[4856]: I1203 10:21:49.052540 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-cg4vl_bb90cacb-d6f2-4e30-a694-21cccff0a5d1/manager/0.log" Dec 03 10:21:49 crc kubenswrapper[4856]: I1203 10:21:49.195714 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-txg26_24007279-b1cb-4d5b-aca4-c55d0cd825b7/kube-rbac-proxy/0.log" Dec 03 10:21:49 crc kubenswrapper[4856]: I1203 10:21:49.274846 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-txg26_24007279-b1cb-4d5b-aca4-c55d0cd825b7/manager/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.107112 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-xfpwp_0de81d3b-bbf7-455a-8842-2261010f69a2/manager/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.111789 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-xfpwp_0de81d3b-bbf7-455a-8842-2261010f69a2/kube-rbac-proxy/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.123166 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-72dlj_352d270a-b735-411d-87ba-58719ee0f984/manager/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.130399 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-72dlj_352d270a-b735-411d-87ba-58719ee0f984/kube-rbac-proxy/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.534613 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dhp69_e889c7da-b8e2-46bb-b700-f700e7e969bc/kube-rbac-proxy/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.573959 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jg8fc_1bfd8278-80a9-41ca-a89e-400e8b62188f/kube-rbac-proxy/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.603617 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dhp69_e889c7da-b8e2-46bb-b700-f700e7e969bc/manager/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.820356 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jg8fc_1bfd8278-80a9-41ca-a89e-400e8b62188f/manager/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.826472 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-dbfch_1c5cce87-5371-47df-8471-7725731c9908/manager/0.log" Dec 03 10:21:50 crc kubenswrapper[4856]: I1203 10:21:50.846155 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-dbfch_1c5cce87-5371-47df-8471-7725731c9908/kube-rbac-proxy/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.056843 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59_77383a17-c2e3-4f54-8296-414e707e2056/kube-rbac-proxy/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.100347 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4g8v59_77383a17-c2e3-4f54-8296-414e707e2056/manager/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.352380 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nj5zz_43d6261a-49c7-40ca-8403-1fa273ef863c/registry-server/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.605820 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-t6jdv_2513c74a-1905-4f71-bf3c-c71095d756d3/kube-rbac-proxy/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.632295 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6bf68648df-qkkzg_d0f9ba14-7b89-4373-a0f6-67ceb97ffb71/operator/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.726050 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-t6jdv_2513c74a-1905-4f71-bf3c-c71095d756d3/manager/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.907590 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mkl9t_a13d32b8-0032-4c2b-9985-f865d89becdc/kube-rbac-proxy/0.log" Dec 03 10:21:51 crc kubenswrapper[4856]: I1203 10:21:51.944911 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-mkl9t_a13d32b8-0032-4c2b-9985-f865d89becdc/manager/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.035797 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rm6rm_25e84c2c-bca6-438b-ad4c-f7154e1ba97a/operator/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.182174 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-2rqwt_169cc116-1edd-4af9-b992-4bdb8e912231/kube-rbac-proxy/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.328464 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-2rqwt_169cc116-1edd-4af9-b992-4bdb8e912231/manager/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.381114 4856 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-864649db6c-vtrz8_daec5857-0ffc-4499-af65-5f9d7ef6baf9/manager/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.427526 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-kfkmm_38e8c4db-27d8-4ffa-98e8-0859bec1243c/kube-rbac-proxy/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.521033 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-kfkmm_38e8c4db-27d8-4ffa-98e8-0859bec1243c/manager/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.599774 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h5j7g_8e9597d6-d043-4377-9b86-cf94a5df8ddf/manager/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.649028 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h5j7g_8e9597d6-d043-4377-9b86-cf94a5df8ddf/kube-rbac-proxy/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.780691 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-nccbk_d82e1df1-d3d5-4f54-874f-291e3d82aac6/kube-rbac-proxy/0.log" Dec 03 10:21:52 crc kubenswrapper[4856]: I1203 10:21:52.817570 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-nccbk_d82e1df1-d3d5-4f54-874f-291e3d82aac6/manager/0.log" Dec 03 10:21:56 crc kubenswrapper[4856]: I1203 10:21:56.688957 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a" Dec 03 10:21:57 crc kubenswrapper[4856]: I1203 10:21:57.437578 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"f3e13995fc7e803643740bd43c31a2d4fd5759b36d36cc828e13d802389c83cf"} Dec 03 10:22:16 crc kubenswrapper[4856]: I1203 10:22:16.027633 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-85pdd_7b9e41f7-3b5b-461d-b0d9-a28daec02d37/control-plane-machine-set-operator/0.log" Dec 03 10:22:16 crc kubenswrapper[4856]: I1203 10:22:16.234139 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4k754_f3c06506-5c89-4e8b-92c2-c4886d17b6df/machine-api-operator/0.log" Dec 03 10:22:16 crc kubenswrapper[4856]: I1203 10:22:16.262597 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4k754_f3c06506-5c89-4e8b-92c2-c4886d17b6df/kube-rbac-proxy/0.log" Dec 03 10:22:30 crc kubenswrapper[4856]: I1203 10:22:30.316391 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-9rpmm_1f803911-3cdc-40bf-8849-0a94fdf62f5c/cert-manager-controller/0.log" Dec 03 10:22:30 crc kubenswrapper[4856]: I1203 10:22:30.492572 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-x7sf7_d69625a4-8ce2-415d-ae2f-e0b5e3e63c96/cert-manager-cainjector/0.log" Dec 03 10:22:30 crc kubenswrapper[4856]: I1203 10:22:30.530156 4856 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-84d4p_a0555a5a-1ddc-46ae-b98a-7e4baa736e35/cert-manager-webhook/0.log" Dec 03 10:22:40 crc kubenswrapper[4856]: I1203 10:22:40.190681 4856 scope.go:117] "RemoveContainer" containerID="1b645224681bf77469c700f6a2cbc629de44e6bb308b648cdd1ff1f5ff0660eb" Dec 03 10:22:40 crc kubenswrapper[4856]: I1203 10:22:40.212208 4856 scope.go:117] "RemoveContainer" containerID="595d1da5ccf1c576179e674471ba1fdf459118adc6ed2a157bb5f3888232f889" Dec 03 10:22:40 crc kubenswrapper[4856]: I1203 10:22:40.236024 4856 scope.go:117] "RemoveContainer" containerID="c4db06c28d0b69c709489e65f617424e08d5c9bf372d454b102bb2666c71e413" Dec 03 10:22:45 crc kubenswrapper[4856]: I1203 10:22:45.706249 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-2ps5x_bca24c59-d3d7-42fb-a6b5-226bece344db/nmstate-console-plugin/0.log" Dec 03 10:22:45 crc kubenswrapper[4856]: I1203 10:22:45.838841 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gwwkk_df2e4dae-23c5-4f2a-978d-1e7293553f21/nmstate-handler/0.log" Dec 03 10:22:45 crc kubenswrapper[4856]: I1203 10:22:45.907607 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sckh8_a981397a-976a-4fc4-8cb9-af2d72410121/nmstate-metrics/0.log" Dec 03 10:22:45 crc kubenswrapper[4856]: I1203 10:22:45.926944 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sckh8_a981397a-976a-4fc4-8cb9-af2d72410121/kube-rbac-proxy/0.log" Dec 03 10:22:46 crc kubenswrapper[4856]: I1203 10:22:46.072489 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-88642_05257373-572f-4663-911d-8f50b368b390/nmstate-operator/0.log" Dec 03 10:22:46 crc kubenswrapper[4856]: I1203 10:22:46.176447 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-vw9qx_e91cc63e-1f90-4427-942e-fe3645f8ee86/nmstate-webhook/0.log" Dec 03 10:23:02 crc kubenswrapper[4856]: I1203 10:23:02.845742 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-hpb7s_c049cfb9-a9ea-4348-88d4-40aacaf0c01a/kube-rbac-proxy/0.log" Dec 03 10:23:02 crc kubenswrapper[4856]: I1203 10:23:02.993871 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-hpb7s_c049cfb9-a9ea-4348-88d4-40aacaf0c01a/controller/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.044036 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.235372 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.235753 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.268507 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.284936 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.462634 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.471999 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.478096 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.495630 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.649157 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-frr-files/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.656550 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-metrics/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.669984 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/cp-reloader/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.740439 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/controller/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.817513 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/kube-rbac-proxy/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.842055 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/frr-metrics/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.939026 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/kube-rbac-proxy-frr/0.log" Dec 03 10:23:03 crc kubenswrapper[4856]: I1203 10:23:03.989133 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/reloader/0.log" Dec 03 10:23:04 crc kubenswrapper[4856]: I1203 10:23:04.208570 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-bcpx9_65ad74a6-7267-45f3-b6c2-898a3906758b/frr-k8s-webhook-server/0.log" Dec 03 10:23:04 crc kubenswrapper[4856]: I1203 10:23:04.374886 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67d6765696-kvt2f_b3ac3c18-dee0-4de8-8380-93060d971722/manager/0.log" Dec 03 10:23:04 crc kubenswrapper[4856]: I1203 10:23:04.483099 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-db87bc449-r6ztx_ae371cb4-e8b4-40ea-a590-884cf5feae1f/webhook-server/0.log" Dec 03 10:23:04 crc kubenswrapper[4856]: I1203 10:23:04.717917 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-q26bm_0142725c-0a39-4e1c-bef6-a3027f105162/kube-rbac-proxy/0.log" Dec 03 10:23:05 crc kubenswrapper[4856]: I1203 10:23:05.164577 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q26bm_0142725c-0a39-4e1c-bef6-a3027f105162/speaker/0.log" Dec 03 10:23:05 crc kubenswrapper[4856]: I1203 10:23:05.332521 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-shn9z_d97a180e-36f0-45d7-b2de-7b92d84f26d8/frr/0.log" Dec 03 10:23:18 crc kubenswrapper[4856]: I1203 10:23:18.514277 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/util/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.226176 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/pull/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.234348 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/pull/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.249005 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/util/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.382951 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/util/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.399951 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/pull/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.433236 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fgx2xd_46751f58-5b43-408e-8722-5155ceba0ebb/extract/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.563689 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.761758 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/pull/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.762997 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.784047 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/pull/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.970755 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/util/0.log" Dec 03 10:23:19 crc kubenswrapper[4856]: I1203 10:23:19.980467 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/pull/0.log" Dec 03 10:23:20 crc kubenswrapper[4856]: I1203 10:23:20.007392 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836w7ss_3f5a808d-c561-4cf2-bf26-920fa9fe2d82/extract/0.log" Dec 03 10:23:20 crc kubenswrapper[4856]: I1203 10:23:20.159173 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-utilities/0.log" Dec 03 10:23:20 crc kubenswrapper[4856]: I1203 10:23:20.312679 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-content/0.log" Dec 03 10:23:20 crc kubenswrapper[4856]: I1203 10:23:20.346000 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-utilities/0.log" Dec 03 10:23:20 crc kubenswrapper[4856]: I1203 10:23:20.359423 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-content/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.192339 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-utilities/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.193854 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/extract-content/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.396789 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-utilities/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.491450 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8xlfp_867da068-dbc1-4ec2-a12d-f443846bebd8/registry-server/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.601976 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-content/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.623464 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-utilities/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.660257 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-content/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: I1203 10:23:21.806406 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-content/0.log" Dec 03 10:23:21 crc kubenswrapper[4856]: 
I1203 10:23:21.825716 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/extract-utilities/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.360612 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-utilities/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.364635 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7nlmw_87add44f-39e7-460b-9f01-d5aa27e44491/marketplace-operator/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.496001 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rk78k_65a38bf6-9f1d-45db-a007-c04bae553534/registry-server/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.541044 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-content/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.578481 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-content/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.590438 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-utilities/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.667199 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-utilities/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.700208 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/extract-content/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.823463 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-utilities/0.log"
Dec 03 10:23:22 crc kubenswrapper[4856]: I1203 10:23:22.897609 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2dlb_0d5cdf16-9723-4454-9d50-01be3e7d70cc/registry-server/0.log"
Dec 03 10:23:23 crc kubenswrapper[4856]: I1203 10:23:23.001577 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-content/0.log"
Dec 03 10:23:23 crc kubenswrapper[4856]: I1203 10:23:23.008418 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-utilities/0.log"
Dec 03 10:23:23 crc kubenswrapper[4856]: I1203 10:23:23.033274 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-content/0.log"
Dec 03 10:23:23 crc kubenswrapper[4856]: I1203 10:23:23.203034 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-content/0.log"
Dec 03 10:23:23 crc kubenswrapper[4856]: I1203 10:23:23.221527 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/extract-utilities/0.log"
Dec 03 10:23:23 crc kubenswrapper[4856]: I1203 10:23:23.687370 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rvlfn_88a69196-dd0b-4747-b165-b72dcdfa48e4/registry-server/0.log"
Dec 03 10:24:22 crc kubenswrapper[4856]: I1203 10:24:22.758464 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:24:22 crc kubenswrapper[4856]: I1203 10:24:22.759081 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:24:52 crc kubenswrapper[4856]: I1203 10:24:52.759164 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:24:52 crc kubenswrapper[4856]: I1203 10:24:52.759757 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:25:06 crc kubenswrapper[4856]: I1203 10:25:06.272175 4856 generic.go:334] "Generic (PLEG): container finished" podID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerID="20455aeb776f4a04504eff5606b3eab9fccf045561cfd4d665edb4c56dcdf7ca" exitCode=0
Dec 03 10:25:06 crc kubenswrapper[4856]: I1203 10:25:06.272388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8brc7/must-gather-njr5v" event={"ID":"f50de991-1f76-49e4-bf5c-e9d32a05986c","Type":"ContainerDied","Data":"20455aeb776f4a04504eff5606b3eab9fccf045561cfd4d665edb4c56dcdf7ca"}
Dec 03 10:25:06 crc kubenswrapper[4856]: I1203 10:25:06.273355 4856 scope.go:117] "RemoveContainer" containerID="20455aeb776f4a04504eff5606b3eab9fccf045561cfd4d665edb4c56dcdf7ca"
Dec 03 10:25:06 crc kubenswrapper[4856]: I1203 10:25:06.727064 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8brc7_must-gather-njr5v_f50de991-1f76-49e4-bf5c-e9d32a05986c/gather/0.log"
Dec 03 10:25:17 crc kubenswrapper[4856]: I1203 10:25:17.992426 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8brc7/must-gather-njr5v"]
Dec 03 10:25:17 crc kubenswrapper[4856]: I1203 10:25:17.993380 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8brc7/must-gather-njr5v" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="copy" containerID="cri-o://3b294895fa8859df49a2c26503767c6cf537625d2e5c4c63391eb92e0d5e1c27" gracePeriod=2
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.001752 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8brc7/must-gather-njr5v"]
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.461150 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8brc7_must-gather-njr5v_f50de991-1f76-49e4-bf5c-e9d32a05986c/copy/0.log"
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.461799 4856 generic.go:334] "Generic (PLEG): container finished" podID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerID="3b294895fa8859df49a2c26503767c6cf537625d2e5c4c63391eb92e0d5e1c27" exitCode=143
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.683731 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8brc7_must-gather-njr5v_f50de991-1f76-49e4-bf5c-e9d32a05986c/copy/0.log"
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.684103 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/must-gather-njr5v"
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.820733 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldbk\" (UniqueName: \"kubernetes.io/projected/f50de991-1f76-49e4-bf5c-e9d32a05986c-kube-api-access-7ldbk\") pod \"f50de991-1f76-49e4-bf5c-e9d32a05986c\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") "
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.820818 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f50de991-1f76-49e4-bf5c-e9d32a05986c-must-gather-output\") pod \"f50de991-1f76-49e4-bf5c-e9d32a05986c\" (UID: \"f50de991-1f76-49e4-bf5c-e9d32a05986c\") "
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.840267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50de991-1f76-49e4-bf5c-e9d32a05986c-kube-api-access-7ldbk" (OuterVolumeSpecName: "kube-api-access-7ldbk") pod "f50de991-1f76-49e4-bf5c-e9d32a05986c" (UID: "f50de991-1f76-49e4-bf5c-e9d32a05986c"). InnerVolumeSpecName "kube-api-access-7ldbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.923402 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldbk\" (UniqueName: \"kubernetes.io/projected/f50de991-1f76-49e4-bf5c-e9d32a05986c-kube-api-access-7ldbk\") on node \"crc\" DevicePath \"\""
Dec 03 10:25:18 crc kubenswrapper[4856]: I1203 10:25:18.974124 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50de991-1f76-49e4-bf5c-e9d32a05986c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f50de991-1f76-49e4-bf5c-e9d32a05986c" (UID: "f50de991-1f76-49e4-bf5c-e9d32a05986c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:25:19 crc kubenswrapper[4856]: I1203 10:25:19.025609 4856 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f50de991-1f76-49e4-bf5c-e9d32a05986c-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 03 10:25:19 crc kubenswrapper[4856]: I1203 10:25:19.475030 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8brc7_must-gather-njr5v_f50de991-1f76-49e4-bf5c-e9d32a05986c/copy/0.log"
Dec 03 10:25:19 crc kubenswrapper[4856]: I1203 10:25:19.475517 4856 scope.go:117] "RemoveContainer" containerID="3b294895fa8859df49a2c26503767c6cf537625d2e5c4c63391eb92e0d5e1c27"
Dec 03 10:25:19 crc kubenswrapper[4856]: I1203 10:25:19.475556 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8brc7/must-gather-njr5v"
Dec 03 10:25:19 crc kubenswrapper[4856]: I1203 10:25:19.499261 4856 scope.go:117] "RemoveContainer" containerID="20455aeb776f4a04504eff5606b3eab9fccf045561cfd4d665edb4c56dcdf7ca"
Dec 03 10:25:20 crc kubenswrapper[4856]: I1203 10:25:20.702812 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" path="/var/lib/kubelet/pods/f50de991-1f76-49e4-bf5c-e9d32a05986c/volumes"
Dec 03 10:25:22 crc kubenswrapper[4856]: I1203 10:25:22.759371 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:25:22 crc kubenswrapper[4856]: I1203 10:25:22.759461 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:25:22 crc kubenswrapper[4856]: I1203 10:25:22.759516 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w"
Dec 03 10:25:22 crc kubenswrapper[4856]: I1203 10:25:22.760348 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3e13995fc7e803643740bd43c31a2d4fd5759b36d36cc828e13d802389c83cf"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 10:25:22 crc kubenswrapper[4856]: I1203 10:25:22.760426 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://f3e13995fc7e803643740bd43c31a2d4fd5759b36d36cc828e13d802389c83cf" gracePeriod=600
Dec 03 10:25:23 crc kubenswrapper[4856]: I1203 10:25:23.521158 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="f3e13995fc7e803643740bd43c31a2d4fd5759b36d36cc828e13d802389c83cf" exitCode=0
Dec 03 10:25:23 crc kubenswrapper[4856]: I1203 10:25:23.521231 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"f3e13995fc7e803643740bd43c31a2d4fd5759b36d36cc828e13d802389c83cf"}
Dec 03 10:25:23 crc kubenswrapper[4856]: I1203 10:25:23.521427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerStarted","Data":"959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"}
Dec 03 10:25:23 crc kubenswrapper[4856]: I1203 10:25:23.521449 4856 scope.go:117] "RemoveContainer" containerID="895aea2b86736257a4069fe88545e76141e9d31a68bf57e614b680e4263ed23a"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.400168 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vrpk9"]
Dec 03 10:26:27 crc kubenswrapper[4856]: E1203 10:26:27.401260 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="registry-server"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401281 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="registry-server"
Dec 03 10:26:27 crc kubenswrapper[4856]: E1203 10:26:27.401297 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="extract-content"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401305 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="extract-content"
Dec 03 10:26:27 crc kubenswrapper[4856]: E1203 10:26:27.401313 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="extract-utilities"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401321 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="extract-utilities"
Dec 03 10:26:27 crc kubenswrapper[4856]: E1203 10:26:27.401340 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="gather"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401348 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="gather"
Dec 03 10:26:27 crc kubenswrapper[4856]: E1203 10:26:27.401358 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="copy"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401364 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="copy"
Dec 03 10:26:27 crc kubenswrapper[4856]: E1203 10:26:27.401389 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" containerName="container-00"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401395 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" containerName="container-00"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401627 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="copy"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401648 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1519fc85-e08e-46d2-8937-1b1b0fe0d4da" containerName="registry-server"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401662 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50de991-1f76-49e4-bf5c-e9d32a05986c" containerName="gather"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.401680 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1575d1aa-4248-4dad-a58b-c1c5d85b6a8c" containerName="container-00"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.403582 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.411972 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrpk9"]
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.505506 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-utilities\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.505609 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2mm\" (UniqueName: \"kubernetes.io/projected/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-kube-api-access-6z2mm\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.505693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-catalog-content\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.607697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-utilities\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.607768 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2mm\" (UniqueName: \"kubernetes.io/projected/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-kube-api-access-6z2mm\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.607853 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-catalog-content\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.608466 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-catalog-content\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.608886 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-utilities\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.636782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2mm\" (UniqueName: \"kubernetes.io/projected/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-kube-api-access-6z2mm\") pod \"redhat-operators-vrpk9\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") " pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:27 crc kubenswrapper[4856]: I1203 10:26:27.724597 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:28 crc kubenswrapper[4856]: I1203 10:26:28.234779 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrpk9"]
Dec 03 10:26:29 crc kubenswrapper[4856]: I1203 10:26:29.134893 4856 generic.go:334] "Generic (PLEG): container finished" podID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerID="72a8eac62f5bae6b13832c0face251dfe579bf5b38d3757de3add24f9bbaf485" exitCode=0
Dec 03 10:26:29 crc kubenswrapper[4856]: I1203 10:26:29.134944 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerDied","Data":"72a8eac62f5bae6b13832c0face251dfe579bf5b38d3757de3add24f9bbaf485"}
Dec 03 10:26:29 crc kubenswrapper[4856]: I1203 10:26:29.134994 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerStarted","Data":"c459e7a96e0d367d0d2f475c58dfd2c1233cd90a73f646d91c36307ab4763896"}
Dec 03 10:26:29 crc kubenswrapper[4856]: I1203 10:26:29.138390 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 10:26:33 crc kubenswrapper[4856]: I1203 10:26:33.225474 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerStarted","Data":"5ee7193b1cc782c558c162fcd69120356c04d47e80b86f012258b1adddf2b085"}
Dec 03 10:26:34 crc kubenswrapper[4856]: I1203 10:26:34.243055 4856 generic.go:334] "Generic (PLEG): container finished" podID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerID="5ee7193b1cc782c558c162fcd69120356c04d47e80b86f012258b1adddf2b085" exitCode=0
Dec 03 10:26:34 crc kubenswrapper[4856]: I1203 10:26:34.243112 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerDied","Data":"5ee7193b1cc782c558c162fcd69120356c04d47e80b86f012258b1adddf2b085"}
Dec 03 10:26:38 crc kubenswrapper[4856]: I1203 10:26:38.279903 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerStarted","Data":"5a510750bc533398a96969383336314babada26fdcb77d77fc369205b2796178"}
Dec 03 10:26:38 crc kubenswrapper[4856]: I1203 10:26:38.305242 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vrpk9" podStartSLOduration=3.095216685 podStartE2EDuration="11.305224613s" podCreationTimestamp="2025-12-03 10:26:27 +0000 UTC" firstStartedPulling="2025-12-03 10:26:29.138165148 +0000 UTC m=+4457.321057449" lastFinishedPulling="2025-12-03 10:26:37.348173076 +0000 UTC m=+4465.531065377" observedRunningTime="2025-12-03 10:26:38.303620703 +0000 UTC m=+4466.486513014" watchObservedRunningTime="2025-12-03 10:26:38.305224613 +0000 UTC m=+4466.488116914"
Dec 03 10:26:40 crc kubenswrapper[4856]: I1203 10:26:40.426651 4856 scope.go:117] "RemoveContainer" containerID="6a2aabc727bc575d48ce8ad5ed5d5b4a8557283f11807512178bd842e7cc556f"
Dec 03 10:26:47 crc kubenswrapper[4856]: I1203 10:26:47.725431 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:47 crc kubenswrapper[4856]: I1203 10:26:47.726070 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:47 crc kubenswrapper[4856]: I1203 10:26:47.790321 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:48 crc kubenswrapper[4856]: I1203 10:26:48.445328 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:48 crc kubenswrapper[4856]: I1203 10:26:48.505339 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrpk9"]
Dec 03 10:26:50 crc kubenswrapper[4856]: I1203 10:26:50.398664 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrpk9" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="registry-server" containerID="cri-o://5a510750bc533398a96969383336314babada26fdcb77d77fc369205b2796178" gracePeriod=2
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.494768 4856 generic.go:334] "Generic (PLEG): container finished" podID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerID="5a510750bc533398a96969383336314babada26fdcb77d77fc369205b2796178" exitCode=0
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.494854 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerDied","Data":"5a510750bc533398a96969383336314babada26fdcb77d77fc369205b2796178"}
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.495126 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrpk9" event={"ID":"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f","Type":"ContainerDied","Data":"c459e7a96e0d367d0d2f475c58dfd2c1233cd90a73f646d91c36307ab4763896"}
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.495142 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c459e7a96e0d367d0d2f475c58dfd2c1233cd90a73f646d91c36307ab4763896"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.562347 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.736226 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-catalog-content\") pod \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") "
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.736322 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-utilities\") pod \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") "
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.736542 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2mm\" (UniqueName: \"kubernetes.io/projected/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-kube-api-access-6z2mm\") pod \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\" (UID: \"f1f0aaf1-a718-4e9a-b9e7-411c201aa39f\") "
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.737686 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-utilities" (OuterVolumeSpecName: "utilities") pod "f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" (UID: "f1f0aaf1-a718-4e9a-b9e7-411c201aa39f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.743359 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-kube-api-access-6z2mm" (OuterVolumeSpecName: "kube-api-access-6z2mm") pod "f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" (UID: "f1f0aaf1-a718-4e9a-b9e7-411c201aa39f"). InnerVolumeSpecName "kube-api-access-6z2mm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.841075 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.841146 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z2mm\" (UniqueName: \"kubernetes.io/projected/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-kube-api-access-6z2mm\") on node \"crc\" DevicePath \"\""
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.843515 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" (UID: "f1f0aaf1-a718-4e9a-b9e7-411c201aa39f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.878187 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvrgk"]
Dec 03 10:26:52 crc kubenswrapper[4856]: E1203 10:26:52.878767 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="extract-content"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.878793 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="extract-content"
Dec 03 10:26:52 crc kubenswrapper[4856]: E1203 10:26:52.878831 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="extract-utilities"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.878841 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="extract-utilities"
Dec 03 10:26:52 crc kubenswrapper[4856]: E1203 10:26:52.878886 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="registry-server"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.878897 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="registry-server"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.879264 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" containerName="registry-server"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.881642 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.889901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvrgk"]
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.943618 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-utilities\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.944763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-catalog-content\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.944971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22wdh\" (UniqueName: \"kubernetes.io/projected/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-kube-api-access-22wdh\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:52 crc kubenswrapper[4856]: I1203 10:26:52.945232 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.046406 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-catalog-content\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.046458 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22wdh\" (UniqueName: \"kubernetes.io/projected/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-kube-api-access-22wdh\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.046507 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-utilities\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.047072 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-catalog-content\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.047112 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-utilities\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.067658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22wdh\" (UniqueName: \"kubernetes.io/projected/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-kube-api-access-22wdh\") pod \"certified-operators-xvrgk\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") " pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.209343 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.515601 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrpk9"
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.540178 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvrgk"]
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.623116 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrpk9"]
Dec 03 10:26:53 crc kubenswrapper[4856]: I1203 10:26:53.630398 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vrpk9"]
Dec 03 10:26:54 crc kubenswrapper[4856]: I1203 10:26:54.586803 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a96f76d-14cc-4326-8b07-77bc4fa32fd0" containerID="1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2" exitCode=0
Dec 03 10:26:54 crc kubenswrapper[4856]: I1203 10:26:54.586857 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerDied","Data":"1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2"}
Dec 03 10:26:54 crc kubenswrapper[4856]: I1203 10:26:54.588562 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerStarted","Data":"cf2df8b1f759d296678f8dce6dd7056e9cbabb0e0127e07f8fcbe5e0b713918b"}
Dec 03 10:26:54 crc kubenswrapper[4856]: I1203 10:26:54.701333 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f0aaf1-a718-4e9a-b9e7-411c201aa39f" path="/var/lib/kubelet/pods/f1f0aaf1-a718-4e9a-b9e7-411c201aa39f/volumes"
Dec 03 10:26:56 crc kubenswrapper[4856]: I1203 10:26:56.630261 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerStarted","Data":"585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6"}
Dec 03 10:26:57 crc kubenswrapper[4856]: I1203 10:26:57.643630 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a96f76d-14cc-4326-8b07-77bc4fa32fd0" containerID="585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6" exitCode=0
Dec 03 10:26:57 crc kubenswrapper[4856]: I1203 10:26:57.643858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerDied","Data":"585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6"}
Dec 03 10:27:00 crc kubenswrapper[4856]: I1203 10:27:00.700842 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerStarted","Data":"935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d"}
Dec 03 10:27:00 crc kubenswrapper[4856]: I1203 10:27:00.724181 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvrgk" podStartSLOduration=3.407841785 podStartE2EDuration="8.724162221s" podCreationTimestamp="2025-12-03 10:26:52 +0000 UTC" firstStartedPulling="2025-12-03 10:26:54.589332262 +0000 UTC m=+4482.772224563" lastFinishedPulling="2025-12-03 10:26:59.905652698 +0000 UTC m=+4488.088544999" observedRunningTime="2025-12-03 10:27:00.713420192 +0000 UTC m=+4488.896312513" watchObservedRunningTime="2025-12-03 10:27:00.724162221 +0000 UTC m=+4488.907054522"
Dec 03 10:27:03 crc kubenswrapper[4856]: I1203 10:27:03.210074 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:27:03 crc kubenswrapper[4856]: I1203 10:27:03.210468 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:27:03 crc kubenswrapper[4856]: I1203 10:27:03.262497 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:27:13 crc kubenswrapper[4856]: I1203 10:27:13.257671 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:27:13 crc kubenswrapper[4856]: I1203 10:27:13.312784 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvrgk"]
Dec 03 10:27:13 crc kubenswrapper[4856]: I1203 10:27:13.844773 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvrgk" podUID="9a96f76d-14cc-4326-8b07-77bc4fa32fd0" containerName="registry-server" containerID="cri-o://935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d" gracePeriod=2
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.308419 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.479828 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-utilities\") pod \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") "
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.479874 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-catalog-content\") pod \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") "
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.479978 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22wdh\" (UniqueName: \"kubernetes.io/projected/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-kube-api-access-22wdh\") pod \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\" (UID: \"9a96f76d-14cc-4326-8b07-77bc4fa32fd0\") "
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.481410 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-utilities" (OuterVolumeSpecName: "utilities") pod "9a96f76d-14cc-4326-8b07-77bc4fa32fd0" (UID: "9a96f76d-14cc-4326-8b07-77bc4fa32fd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.497989 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-kube-api-access-22wdh" (OuterVolumeSpecName: "kube-api-access-22wdh") pod "9a96f76d-14cc-4326-8b07-77bc4fa32fd0" (UID: "9a96f76d-14cc-4326-8b07-77bc4fa32fd0"). InnerVolumeSpecName "kube-api-access-22wdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.535059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a96f76d-14cc-4326-8b07-77bc4fa32fd0" (UID: "9a96f76d-14cc-4326-8b07-77bc4fa32fd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.581653 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.581688 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.581699 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22wdh\" (UniqueName: \"kubernetes.io/projected/9a96f76d-14cc-4326-8b07-77bc4fa32fd0-kube-api-access-22wdh\") on node \"crc\" DevicePath \"\""
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.860692 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a96f76d-14cc-4326-8b07-77bc4fa32fd0" containerID="935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d" exitCode=0
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.860771 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerDied","Data":"935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d"}
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.860846 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvrgk" event={"ID":"9a96f76d-14cc-4326-8b07-77bc4fa32fd0","Type":"ContainerDied","Data":"cf2df8b1f759d296678f8dce6dd7056e9cbabb0e0127e07f8fcbe5e0b713918b"}
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.860878 4856 scope.go:117] "RemoveContainer" containerID="935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.861356 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvrgk"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.898466 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvrgk"]
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.904778 4856 scope.go:117] "RemoveContainer" containerID="585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.915994 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvrgk"]
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.928825 4856 scope.go:117] "RemoveContainer" containerID="1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.976601 4856 scope.go:117] "RemoveContainer" containerID="935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d"
Dec 03 10:27:14 crc kubenswrapper[4856]: E1203 10:27:14.976947 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d\": container with ID starting with 935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d not found: ID does not exist" containerID="935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.976977 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d"} err="failed to get container status \"935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d\": rpc error: code = NotFound desc = could not find container \"935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d\": container with ID starting with 935628871cb9efb81b696523f873576124c026f40cf264be60afc263f230f11d not found: ID does not exist"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.976997 4856 scope.go:117] "RemoveContainer" containerID="585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6"
Dec 03 10:27:14 crc kubenswrapper[4856]: E1203 10:27:14.977304 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6\": container with ID starting with 585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6 not found: ID does not exist" containerID="585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.977369 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6"} err="failed to get container status \"585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6\": rpc error: code = NotFound desc = could not find container \"585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6\": container with ID starting with 585a2d86f2e85a40094b68876e6920dab5bd1649194a55377163a73332aeecf6 not found: ID does not exist"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.977405 4856 scope.go:117] "RemoveContainer" containerID="1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2"
Dec 03 10:27:14 crc kubenswrapper[4856]: E1203 10:27:14.977974 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2\": container with ID starting with 1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2 not found: ID does not exist" containerID="1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2"
Dec 03 10:27:14 crc kubenswrapper[4856]: I1203 10:27:14.978001 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2"} err="failed to get container status \"1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2\": rpc error: code = NotFound desc = could not find container \"1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2\": container with ID starting with 1a1cfc70b304ba7340366a24819d6081c82901e183abacf6438618dbffe6c4e2 not found: ID does not exist"
Dec 03 10:27:16 crc kubenswrapper[4856]: I1203 10:27:16.703451 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a96f76d-14cc-4326-8b07-77bc4fa32fd0" path="/var/lib/kubelet/pods/9a96f76d-14cc-4326-8b07-77bc4fa32fd0/volumes"
Dec 03 10:27:40 crc kubenswrapper[4856]: I1203 10:27:40.517193 4856 scope.go:117] "RemoveContainer" containerID="5bfc276247a07d1327c8fba9b17472955e23019a7ff5e0c392cdf812e80d1adf"
Dec 03 10:27:52 crc kubenswrapper[4856]: I1203 10:27:52.759653 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:27:52 crc kubenswrapper[4856]: I1203 10:27:52.760328 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:28:22 crc kubenswrapper[4856]: I1203 10:28:22.759762 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:28:22 crc kubenswrapper[4856]: I1203 10:28:22.760436 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:28:52 crc kubenswrapper[4856]: I1203 10:28:52.759432 4856 patch_prober.go:28] interesting pod/machine-config-daemon-gzk5w container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 10:28:52 crc kubenswrapper[4856]: I1203 10:28:52.760034 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 10:28:52 crc kubenswrapper[4856]: I1203 10:28:52.760093 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w"
Dec 03 10:28:52 crc kubenswrapper[4856]: I1203 10:28:52.760992 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"} pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 10:28:52 crc kubenswrapper[4856]: I1203 10:28:52.761060 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerName="machine-config-daemon" containerID="cri-o://959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae" gracePeriod=600
Dec 03 10:28:52 crc kubenswrapper[4856]: E1203 10:28:52.898999 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:28:53 crc kubenswrapper[4856]: I1203 10:28:53.900151 4856 generic.go:334] "Generic (PLEG): container finished" podID="3541a85a-a53e-472a-9323-3bdb8c844e1f" containerID="959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae" exitCode=0
Dec 03 10:28:53 crc kubenswrapper[4856]: I1203 10:28:53.900238 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" event={"ID":"3541a85a-a53e-472a-9323-3bdb8c844e1f","Type":"ContainerDied","Data":"959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"}
Dec 03 10:28:53 crc kubenswrapper[4856]: I1203 10:28:53.900491 4856 scope.go:117] "RemoveContainer" containerID="f3e13995fc7e803643740bd43c31a2d4fd5759b36d36cc828e13d802389c83cf"
Dec 03 10:28:53 crc kubenswrapper[4856]: I1203 10:28:53.901044 4856 scope.go:117] "RemoveContainer" containerID="959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"
Dec 03 10:28:53 crc kubenswrapper[4856]: E1203 10:28:53.901267 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:29:06 crc kubenswrapper[4856]: I1203 10:29:06.695121 4856 scope.go:117] "RemoveContainer" containerID="959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"
Dec 03 10:29:06 crc kubenswrapper[4856]: E1203 10:29:06.695951 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:29:19 crc kubenswrapper[4856]: I1203 10:29:19.689506 4856 scope.go:117] "RemoveContainer" containerID="959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"
Dec 03 10:29:19 crc kubenswrapper[4856]: E1203 10:29:19.690474 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:29:34 crc kubenswrapper[4856]: I1203 10:29:34.688991 4856 scope.go:117] "RemoveContainer" containerID="959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"
Dec 03 10:29:34 crc kubenswrapper[4856]: E1203 10:29:34.689780 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"
Dec 03 10:29:46 crc kubenswrapper[4856]: I1203 10:29:46.689598 4856 scope.go:117] "RemoveContainer" containerID="959c890bea85f29ddf10554346ca448a109c36b4b0fd0aec8695480ae347ddae"
Dec 03 10:29:46 crc kubenswrapper[4856]: E1203 10:29:46.693141 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gzk5w_openshift-machine-config-operator(3541a85a-a53e-472a-9323-3bdb8c844e1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-gzk5w" podUID="3541a85a-a53e-472a-9323-3bdb8c844e1f"